monday_ruby 1.0.0 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (93)
  1. checksums.yaml +4 -4
  2. data/.env +1 -1
  3. data/.rspec +0 -1
  4. data/.rubocop.yml +19 -0
  5. data/.simplecov +1 -0
  6. data/CHANGELOG.md +49 -0
  7. data/CONTRIBUTING.md +165 -0
  8. data/README.md +167 -88
  9. data/docs/.vitepress/config.mjs +255 -0
  10. data/docs/.vitepress/theme/index.js +4 -0
  11. data/docs/.vitepress/theme/style.css +43 -0
  12. data/docs/README.md +80 -0
  13. data/docs/explanation/architecture.md +507 -0
  14. data/docs/explanation/best-practices/errors.md +478 -0
  15. data/docs/explanation/best-practices/performance.md +1084 -0
  16. data/docs/explanation/best-practices/rate-limiting.md +630 -0
  17. data/docs/explanation/best-practices/testing.md +820 -0
  18. data/docs/explanation/column-values.md +857 -0
  19. data/docs/explanation/design.md +795 -0
  20. data/docs/explanation/graphql.md +356 -0
  21. data/docs/explanation/migration/v1.md +808 -0
  22. data/docs/explanation/pagination.md +447 -0
  23. data/docs/guides/advanced/batch.md +1274 -0
  24. data/docs/guides/advanced/complex-queries.md +1114 -0
  25. data/docs/guides/advanced/errors.md +818 -0
  26. data/docs/guides/advanced/pagination.md +934 -0
  27. data/docs/guides/advanced/rate-limiting.md +981 -0
  28. data/docs/guides/authentication.md +286 -0
  29. data/docs/guides/boards/create.md +386 -0
  30. data/docs/guides/boards/delete.md +405 -0
  31. data/docs/guides/boards/duplicate.md +511 -0
  32. data/docs/guides/boards/query.md +530 -0
  33. data/docs/guides/boards/update.md +453 -0
  34. data/docs/guides/columns/create.md +452 -0
  35. data/docs/guides/columns/metadata.md +492 -0
  36. data/docs/guides/columns/query.md +455 -0
  37. data/docs/guides/columns/update-multiple.md +459 -0
  38. data/docs/guides/columns/update-values.md +509 -0
  39. data/docs/guides/files/add-to-column.md +40 -0
  40. data/docs/guides/files/add-to-update.md +37 -0
  41. data/docs/guides/files/clear-column.md +33 -0
  42. data/docs/guides/first-request.md +285 -0
  43. data/docs/guides/folders/manage.md +750 -0
  44. data/docs/guides/groups/items.md +626 -0
  45. data/docs/guides/groups/manage.md +501 -0
  46. data/docs/guides/installation.md +169 -0
  47. data/docs/guides/items/create.md +493 -0
  48. data/docs/guides/items/delete.md +514 -0
  49. data/docs/guides/items/query.md +605 -0
  50. data/docs/guides/items/subitems.md +483 -0
  51. data/docs/guides/items/update.md +699 -0
  52. data/docs/guides/updates/manage.md +619 -0
  53. data/docs/guides/use-cases/dashboard.md +1421 -0
  54. data/docs/guides/use-cases/import.md +1962 -0
  55. data/docs/guides/use-cases/task-management.md +1381 -0
  56. data/docs/guides/workspaces/manage.md +502 -0
  57. data/docs/index.md +69 -0
  58. data/docs/package-lock.json +2468 -0
  59. data/docs/package.json +13 -0
  60. data/docs/reference/client.md +540 -0
  61. data/docs/reference/configuration.md +586 -0
  62. data/docs/reference/errors.md +693 -0
  63. data/docs/reference/resources/account.md +208 -0
  64. data/docs/reference/resources/activity-log.md +369 -0
  65. data/docs/reference/resources/board-view.md +359 -0
  66. data/docs/reference/resources/board.md +393 -0
  67. data/docs/reference/resources/column.md +543 -0
  68. data/docs/reference/resources/file.md +236 -0
  69. data/docs/reference/resources/folder.md +386 -0
  70. data/docs/reference/resources/group.md +507 -0
  71. data/docs/reference/resources/item.md +348 -0
  72. data/docs/reference/resources/subitem.md +267 -0
  73. data/docs/reference/resources/update.md +259 -0
  74. data/docs/reference/resources/workspace.md +213 -0
  75. data/docs/reference/response.md +560 -0
  76. data/docs/tutorial/first-integration.md +713 -0
  77. data/lib/monday/client.rb +41 -2
  78. data/lib/monday/configuration.rb +13 -0
  79. data/lib/monday/deprecation.rb +23 -0
  80. data/lib/monday/error.rb +5 -2
  81. data/lib/monday/request.rb +19 -1
  82. data/lib/monday/resources/base.rb +4 -0
  83. data/lib/monday/resources/board.rb +52 -0
  84. data/lib/monday/resources/column.rb +6 -0
  85. data/lib/monday/resources/file.rb +56 -0
  86. data/lib/monday/resources/folder.rb +55 -0
  87. data/lib/monday/resources/group.rb +66 -0
  88. data/lib/monday/resources/item.rb +62 -0
  89. data/lib/monday/util.rb +33 -1
  90. data/lib/monday/version.rb +1 -1
  91. data/lib/monday_ruby.rb +1 -0
  92. metadata +92 -11
  93. data/monday_ruby.gemspec +0 -39
@@ -0,0 +1,1084 @@
# Performance Best Practices

Performance in API integrations isn't just about speed—it's about efficiency, reliability, and cost. This guide explores the key considerations and trade-offs when building performant applications with the monday_ruby gem.

## Performance Considerations in API Integrations

API integration performance differs fundamentally from traditional application performance.

### What Makes API Performance Different

- **Database queries**: Microseconds, local, predictable
- **API calls**: Milliseconds to seconds, remote, variable

The primary performance bottleneck in API integrations is **network latency**. No amount of code optimization can eliminate the fundamental cost of:

- Network round-trip time (~10-100ms)
- API processing time (~50-500ms)
- Data serialization/deserialization (~1-10ms)

This means **reducing the number of API calls** is far more impactful than optimizing how you make those calls.
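
The arithmetic behind this is worth making explicit. A minimal sketch with illustrative (not measured) latency numbers:

```ruby
# Illustrative latency model: each API call pays a fixed network + processing
# cost, so total latency scales with the number of calls, not code speed.
ROUND_TRIP_MS = 100 # assumed network round-trip
PROCESSING_MS = 200 # assumed API processing time per request

def total_latency_ms(call_count)
  call_count * (ROUND_TRIP_MS + PROCESSING_MS)
end

sequential = total_latency_ms(50) # 50 separate calls
batched    = total_latency_ms(1)  # one batched call for the same data

puts "sequential: #{sequential} ms, batched: #{batched} ms"
```

Shaving a few milliseconds off serialization is invisible next to removing 49 round-trips.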

### The Performance Triangle

You can optimize for three dimensions, but rarely all at once:

```
        Latency
          /\
         /  \
        /    \
       /______\
   Cost        Throughput
```

- **Latency**: Time to complete a single request (user-facing)
- **Throughput**: Total requests processed per second (system capacity)
- **Cost**: API quota, complexity budget, infrastructure

Optimizing one often degrades another:

- Lower latency → Higher cost (parallel requests use more quota)
- Higher throughput → Higher latency (queueing delays)
- Lower cost → Lower throughput (fewer requests, more caching)

Choose your priority based on your use case.

## Query Optimization: Select Only Needed Fields

The simplest and most effective optimization is requesting only what you need.

### The Cost of Extra Fields

Every field in a GraphQL query has a cost:

```ruby
# Expensive: 500+ complexity points
client.board.query(
  ids: [board_id],
  select: [
    'id', 'name', 'description', 'state', 'board_kind',
    'board_folder_id', 'permissions', 'type', 'owner',
    { 'groups' => [
      'id', 'title', 'color', 'position',
      { 'items' => [
        'id', 'name', 'state', 'created_at', 'updated_at',
        { 'column_values' => ['id', 'text', 'value', 'type'] }
      ]}
    ]}
  ]
)

# Efficient: 50 complexity points
client.board.query(
  ids: [board_id],
  select: ['id', 'name']
)
```

**Impact**: 10x complexity reduction → 10x more queries within rate limit

### Field Selection Strategy

**Start minimal, add incrementally:**

```ruby
# Step 1: Identify minimum needed
select: ['id', 'name'] # Just need to display board name

# Step 2: Add only when required
select: ['id', 'name', 'state'] # Need to filter by state

# Step 3: Add nested data carefully
select: ['id', 'name', { 'groups' => ['id', 'title'] }] # Need group names
```

**Don't guess**: Use logging to identify unnecessary fields:

```ruby
def fetch_board_data(board_id)
  response = client.board.query(ids: [board_id], select: fields)

  # Log which fields are actually used
  used_fields = track_field_access(response)
  logger.debug("Fields requested: #{fields}")
  logger.debug("Fields actually used: #{used_fields}")

  response
end
```

If you request 20 fields but only use 5, you're wasting complexity budget.

### Template Queries for Common Use Cases

Define reusable field sets:

```ruby
module MondayQueries
  BOARD_SUMMARY = ['id', 'name', 'state'].freeze

  BOARD_DETAILED = [
    'id', 'name', 'description', 'state',
    { 'groups' => ['id', 'title'] }
  ].freeze

  BOARD_WITH_ITEMS = [
    'id', 'name',
    { 'groups' => [
      'id', 'title',
      { 'items' => ['id', 'name'] }
    ]}
  ].freeze
end

# Use templates
client.board.query(
  ids: [board_id],
  select: MondayQueries::BOARD_SUMMARY
)
```

This ensures consistency and makes optimization easier (change the template, all queries improve).

## Pagination Strategies for Large Datasets

Fetching large datasets requires pagination. The strategy you choose dramatically impacts performance.

### Strategy 1: Offset Pagination (Simple)

Request data in fixed-size pages:

```ruby
def fetch_all_items_offset(board_id)
  all_items = []
  page = 1
  limit = 50

  loop do
    items = client.item.query_by_board(
      board_id: board_id,
      limit: limit,
      page: page
    ).dig('data', 'items')

    break if items.empty?

    all_items.concat(items)
    page += 1
  end

  all_items
end
```

**Pros:**
- Simple to implement
- Can jump to any page
- Familiar pattern

**Cons:**
- Slower for large datasets (the database skips over offset records)
- Inconsistent results if data changes during pagination
- Higher complexity for later pages

**When to use**: Small to medium datasets (<1000 records), random page access needed

### Strategy 2: Cursor Pagination (Efficient)

Use a cursor to track position:

```ruby
def fetch_all_items_cursor(board_id)
  all_items = []
  cursor = nil

  loop do
    response = client.item.query_by_board(
      board_id: board_id,
      limit: 50,
      cursor: cursor
    )

    items = response.dig('data', 'items')
    break if items.empty?

    all_items.concat(items)
    cursor = response.dig('data', 'cursor') # Next page cursor
    break unless cursor
  end

  all_items
end
```

**Pros:**
- Efficient for large datasets (database uses index)
- Consistent results during pagination
- Lower complexity

**Cons:**
- Can't jump to arbitrary pages
- More complex to implement
- Not supported by all monday.com endpoints

**When to use**: Large datasets (>1000 records), sequential access patterns

### Strategy 3: Parallel Pagination (Fast)

Fetch multiple pages simultaneously:

```ruby
def fetch_all_items_parallel(board_id, total_pages: 10)
  # Fetch the first pages in parallel
  responses = (1..total_pages).map do |page|
    Thread.new do
      client.item.query_by_board(
        board_id: board_id,
        limit: 50,
        page: page
      )
    end
  end.map(&:value)

  responses.flat_map { |r| r.dig('data', 'items') || [] }
end
```

**Pros:**
- Much faster (parallel network requests)
- Good for bounded datasets

**Cons:**
- Higher rate limit consumption (burst of requests)
- More complex error handling
- Requires knowing total pages upfront

**When to use**: Known dataset size, latency-critical operations, ample rate limit budget

### Strategy 4: Adaptive Pagination (Smart)

Adjust page size based on performance:

```ruby
def fetch_all_items_adaptive(board_id)
  all_items = []
  page = 1
  limit = 25 # Start conservative

  loop do
    start_time = Time.now
    items = client.item.query_by_board(
      board_id: board_id,
      limit: limit,
      page: page
    ).dig('data', 'items')

    duration = Time.now - start_time

    break if items.empty?
    all_items.concat(items)

    # Adapt page size based on response time
    if duration < 0.5
      limit = [limit * 2, 100].min # Increase if fast
    elsif duration > 2.0
      limit = [limit / 2, 10].max # Decrease if slow
    end

    page += 1
  end

  all_items
end
```

**Pros:**
- Self-optimizing
- Handles variable performance
- Balances speed and reliability

**Cons:**
- Complex implementation
- Unpredictable behavior
- May oscillate under variable load

**When to use**: Highly variable dataset sizes or API performance

### Choosing a Pagination Strategy

| Dataset Size | Access Pattern | Rate Limit | Strategy |
|--------------|----------------|------------|----------|
| <500 items | Full scan | Ample | Offset, large pages |
| <500 items | Random access | Limited | Offset, small pages |
| 500-5000 items | Full scan | Ample | Parallel offset |
| 500-5000 items | Full scan | Limited | Cursor |
| >5000 items | Full scan | Any | Cursor |
| >5000 items | Recent items | Any | Cursor, stop early |

## Batching Operations Efficiently

Batching reduces API calls by combining multiple operations.

### Request Batching

Fetch multiple resources in one request:

```ruby
# Inefficient: N+1 queries
board_ids.each do |board_id|
  client.board.query(ids: [board_id]) # 10 boards = 10 API calls
end

# Efficient: Single batched query
client.board.query(ids: board_ids) # 10 boards = 1 API call
```

**Impact**: 10x reduction in API calls, 10x reduction in latency (eliminates 9 round-trips)

### Batch Size Considerations

Bigger batches aren't always better:

```ruby
# Too small: Many API calls
item_ids.each_slice(5) do |batch|
  client.item.query(ids: batch) # 100 items = 20 calls
end

# Too large: High complexity, timeouts
client.item.query(ids: item_ids) # 1000 items = 1 call, but may timeout

# Optimal: Balance efficiency and reliability
item_ids.each_slice(50) do |batch|
  client.item.query(ids: batch) # 100 items = 2 calls
end
```

**Optimal batch size**: 25-100 items (depends on complexity of fields requested)
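
The call counts in those comments are just ceiling division, mirroring what `each_slice` produces:

```ruby
# Number of API calls needed to fetch `count` items in slices of `batch_size`.
def api_calls_needed(count, batch_size)
  (count.to_f / batch_size).ceil
end

[5, 50, 500].each do |size|
  puts "batch size #{size}: #{api_calls_needed(1000, size)} calls" # 200, 20, 2
end
```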

### Mutation Batching

Some mutations can be batched:

```ruby
# If API supports batch mutations
updates = [
  { item_id: 1, column_values: { status: 'Done' } },
  { item_id: 2, column_values: { status: 'Done' } },
  { item_id: 3, column_values: { status: 'Done' } }
]

# Check if monday.com API supports batch mutations for your use case
client.item.batch_update(updates) # Single API call
```

**Note**: Not all mutations support batching. Check the API documentation.

### Temporal Batching (Debouncing)

Collect requests over time, then batch:

```ruby
class RequestBatcher
  def initialize(client:, window: 1.0)
    @client = client
    @window = window
    @pending = []
    @timer = nil
  end

  def add(item_id)
    @pending << item_id
    schedule_flush
  end

  private

  def schedule_flush
    return if @timer

    @timer = Thread.new do
      sleep(@window)
      flush
    end
  end

  def flush
    return if @pending.empty?

    batch = @pending.dup
    @pending.clear
    @timer = nil

    @client.item.query(ids: batch)
  end
end

# Usage: Collect IDs over 1 second, then fetch in batch
batcher = RequestBatcher.new(client: client, window: 1.0)
batcher.add(item_id_1)
batcher.add(item_id_2)
batcher.add(item_id_3)
# After 1 second: single API call with all 3 IDs
```

**Use case**: High-frequency updates (webhooks, real-time sync)

## Caching Strategies and Invalidation

Caching eliminates API calls entirely—the ultimate optimization.

### What to Cache

**High-value cache candidates:**
- Reference data (rarely changes, frequently accessed)
- Computed results (expensive to generate)
- Rate limit state (prevent redundant checks)

**Poor cache candidates:**
- Real-time data (stale data causes issues)
- User-specific data (cache hit rate too low)
- Large datasets (memory constraints)

### Cache Layers

Implement multiple cache layers:

```ruby
# Layer 1: In-memory (fastest, smallest)
@board_cache ||= {}

# Layer 2: Redis (fast, shared across processes)
Rails.cache # Configured to use Redis

# Layer 3: Database (slower, persistent)
CachedBoard.find_by(monday_id: board_id)

# Layer 4: API (slowest, source of truth)
client.board.query(ids: [board_id])
```

Check layers in order, falling through to the API only if all caches miss:

```ruby
def get_board_with_multilayer_cache(board_id)
  # Layer 1: In-memory
  return @board_cache[board_id] if @board_cache[board_id]

  # Layer 2: Redis
  cached = Rails.cache.read("board_#{board_id}")
  if cached
    @board_cache[board_id] = cached
    return cached
  end

  # Layer 3: Database
  db_cached = CachedBoard.find_by(monday_id: board_id)
  if db_cached && db_cached.fresh?
    Rails.cache.write("board_#{board_id}", db_cached.data, expires_in: 1.hour)
    @board_cache[board_id] = db_cached.data
    return db_cached.data
  end

  # Layer 4: API
  fresh_data = client.board.query(ids: [board_id])

  # Populate all caches
  @board_cache[board_id] = fresh_data
  Rails.cache.write("board_#{board_id}", fresh_data, expires_in: 1.hour)
  CachedBoard.upsert(monday_id: board_id, data: fresh_data)

  fresh_data
end
```

### Cache Invalidation Strategies

#### 1. Time-Based (TTL)

Simplest: the cache expires after a fixed duration:

```ruby
Rails.cache.fetch("board_#{board_id}", expires_in: 1.hour) do
  client.board.query(ids: [board_id])
end
```

**Pros:** Simple, no invalidation logic needed
**Cons:** Data can be stale for the full TTL period

**Choosing TTL:**
- Static data: 24 hours - 1 week
- Slow-changing data: 1-6 hours
- Moderate data: 5-60 minutes
- Fast-changing data: 30 seconds - 5 minutes
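
These bands can be centralized so every cache write uses a consistent policy; the volatility category names are this guide's own:

```ruby
# TTLs (in seconds) per data volatility band, matching the guidance above.
TTL_BY_VOLATILITY = {
  static:   24 * 3600, # 24 hours (up to a week for truly static data)
  slow:     6 * 3600,  # 1-6 hours
  moderate: 30 * 60,   # 5-60 minutes
  fast:     60         # 30 seconds - 5 minutes
}.freeze

def ttl_for(volatility)
  TTL_BY_VOLATILITY.fetch(volatility) { 5 * 60 } # unknown: conservative 5 minutes
end

puts ttl_for(:static)  # 86400
puts ttl_for(:unknown) # 300
```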

#### 2. Write-Through Invalidation

Invalidate the cache when data changes:

```ruby
def update_board(board_id, attributes)
  result = client.board.update(board_id: board_id, **attributes)

  # Invalidate cache immediately
  Rails.cache.delete("board_#{board_id}")
  @board_cache.delete(board_id)

  result
end
```

**Pros:** Data always fresh after updates
**Cons:** Doesn't handle external changes (updates from the monday.com UI)

#### 3. Webhook-Based Invalidation

Listen for monday.com webhooks to invalidate:

```ruby
# Webhook endpoint
post '/webhooks/monday' do
  event = JSON.parse(request.body.read)

  case event['type']
  when 'update_board'
    Rails.cache.delete("board_#{event['board_id']}")
  when 'update_item'
    # Invalidate board cache (item count may have changed)
    Rails.cache.delete("board_items_#{event['board_id']}")
  end
end
```

**Pros:** Invalidates based on actual changes
**Cons:** Requires webhook setup, network reliability

#### 4. Background Refresh

Refresh the cache before expiry (always fresh, no cache misses):

```ruby
class BoardCacheRefresher
  def perform
    Board.find_each do |board|
      fresh_data = client.board.query(ids: [board.monday_id])
      Rails.cache.write("board_#{board.monday_id}", fresh_data, expires_in: 1.hour)
    end
  end
end

# Schedule every 30 minutes (before the 1-hour TTL expires)
```

**Pros:** No cache misses, always fresh data
**Cons:** Continuous API usage, wasted refreshes for unused data
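
A scheduler also needs a rule for when an entry is due. A minimal refresh-ahead sketch, assuming you record each entry's write time:

```ruby
# Returns true when an entry has passed a fraction of its TTL and should be
# refreshed in the background before it actually expires.
def due_for_refresh?(written_at, ttl:, refresh_after: 0.5, now: Time.now)
  (now - written_at) >= ttl * refresh_after
end

written = Time.now - 1800 # cached 30 minutes ago
puts due_for_refresh?(written, ttl: 3600)                     # true: half the TTL elapsed
puts due_for_refresh?(written, ttl: 3600, refresh_after: 0.9) # false: not yet at 90%
```

Refreshing at half the TTL leaves a full half-TTL window to retry a failed refresh before serving stale data.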

### Cache Key Design

Good cache keys prevent collisions and enable targeted invalidation:

```ruby
# Bad: Global cache (hard to invalidate)
Rails.cache.fetch('boards') { ... }

# Good: Specific cache with identifiers
Rails.cache.fetch("board:#{board_id}:v1") { ... }

# Better: Include query parameters
Rails.cache.fetch("board:#{board_id}:fields:#{fields.hash}:v1") { ... }

# Best: Versioned with dependencies
Rails.cache.fetch("board:#{board_id}:user:#{user_id}:v2") { ... }
```

Include versions (`v1`, `v2`) to invalidate all caches when the schema changes.
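
A small builder keeps these conventions consistent. Note it digests the field list instead of calling `fields.hash`, because Ruby's `Object#hash` is randomized per process and would defeat a shared cache; the helper and version constant are this sketch's assumptions:

```ruby
require 'digest'

CACHE_SCHEMA_VERSION = 2 # bump to invalidate every key at once

# Builds namespaced keys like "board:42:fields:<digest>:v2".
def board_cache_key(board_id, fields: nil, user_id: nil)
  parts = ['board', board_id]
  parts += ['fields', Digest::SHA1.hexdigest(fields.inspect)[0, 8]] if fields
  parts += ['user', user_id] if user_id
  parts << "v#{CACHE_SCHEMA_VERSION}"
  parts.join(':')
end

puts board_cache_key(42)                         # board:42:v2
puts board_cache_key(42, fields: ['id', 'name']) # board:42:fields:<8-char digest>:v2
```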

## Connection Pooling and Timeouts

Managing HTTP connections affects both performance and reliability.

### Connection Pooling

Reuse HTTP connections instead of creating new ones:

```ruby
require 'connection_pool' # connection_pool gem

# Without pooling: New connection per request (slow)
Net::HTTP.start(uri.host, uri.port) do |http|
  http.request(request)
end

# With pooling: Reuse existing connections (fast)
@connection_pool ||= ConnectionPool.new(size: 10) do
  Net::HTTP.start(uri.host, uri.port, use_ssl: true)
end

@connection_pool.with do |http|
  http.request(request)
end
```

**Benefits:**
- Eliminate connection overhead (~50-100ms per connection)
- Reduce server load
- Better throughput

**Pool size considerations:**
- Too small: Threads wait for available connections
- Too large: Excessive memory usage, server connection limits
- Rule of thumb: 5-10 per worker process

### Timeout Configuration

Timeouts prevent indefinite waiting:

```ruby
http = Net::HTTP.new(uri.host, uri.port)
http.open_timeout = 5   # Time to establish connection
http.read_timeout = 30  # Time to read response
http.write_timeout = 10 # Time to send request

begin
  http.request(request)
rescue Net::OpenTimeout
  # Connection couldn't be established
  retry_with_backoff
rescue Net::ReadTimeout
  # Request sent but response took too long
  log_slow_request
  raise
end
```

**Timeout values:**
- **Open timeout**: 3-5 seconds (connection should be fast)
- **Read timeout**: 30-60 seconds (complex queries take time)
- **Write timeout**: 10-15 seconds (uploads can be slow)

**Trade-offs:**
- Short timeouts: Fail fast, better user experience, may abort valid slow requests
- Long timeouts: More reliable, but users wait longer for errors

## Async Processing Patterns

Asynchronous processing decouples API calls from user requests.

### Background Jobs

Move API calls to the background:

```ruby
# Synchronous (user waits)
def sync_board
  client.board.query(ids: [board_id]) # User waits for API call
  render json: { status: 'synced' }
end

# Asynchronous (user doesn't wait)
def sync_board
  SyncBoardJob.perform_async(board_id) # Queue job
  render json: { status: 'queued' } # Immediate response
end

# Background job
class SyncBoardJob
  include Sidekiq::Worker

  def perform(board_id)
    client.board.query(ids: [board_id]) # Runs in background
  end
end
```

**Benefits:**
- Immediate user response
- Retry on failure
- Rate limit management (queue throttling)

**Drawbacks:**
- User doesn't see immediate results
- Requires job infrastructure (Sidekiq, Redis)

### Async I/O

Use async HTTP libraries for concurrent requests:

```ruby
require 'async'
require 'async/http/internet'

Async do
  internet = Async::HTTP::Internet.new

  # Fetch multiple boards concurrently
  # (illustrative URL; in practice the monday.com API is GraphQL over POST)
  tasks = board_ids.map do |board_id|
    Async do
      response = internet.get("https://api.monday.com/v2/boards/#{board_id}")
      JSON.parse(response.read)
    end
  end

  # Wait for all to complete
  results = tasks.map(&:wait)
end
```

**Benefits:**
- Concurrent I/O without threads
- Lower memory overhead
- Efficient for I/O-bound operations

**Drawbacks:**
- Different programming model
- Library compatibility issues

### Webhooks Instead of Polling

Replace polling with webhooks:

```ruby
# Polling (inefficient)
loop do
  response = client.board.query(ids: [board_id])
  check_for_changes(response)
  sleep(60) # API call every minute
end

# Webhooks (efficient)
post '/webhooks/monday' do
  event = JSON.parse(request.body.read)
  handle_change(event) # Only called when actual changes occur
end
```

**Benefits:**
- Zero polling overhead
- Instant notifications
- Dramatic reduction in API calls

**Drawbacks:**
- Requires public endpoint
- Network reliability dependency
- Initial setup complexity

## Memory Management with Large Responses

Large API responses can cause memory issues.

### Streaming Responses

Process data incrementally instead of loading it all at once:

```ruby
# Bad: Load entire response into memory
response = client.board.query(ids: board_ids) # Could be 100MB
all_items = response.dig('data', 'boards').flat_map { |b| b['items'] }

# Good: Process in chunks
board_ids.each_slice(10) do |batch|
  response = client.board.query(ids: batch) # Smaller responses
  items = response.dig('data', 'boards').flat_map { |b| b['items'] }

  process_items(items) # Process and release memory
  GC.start # Force garbage collection if needed
end
```

### Lazy Evaluation

Use enumerators for on-demand loading:

```ruby
def item_enumerator(board_id)
  Enumerator.new do |yielder|
    page = 1

    loop do
      items = client.item.query_by_board(
        board_id: board_id,
        page: page,
        limit: 50
      ).dig('data', 'items')

      break if items.empty?

      items.each { |item| yielder << item }
      page += 1
    end
  end
end

# Usage: Only loads pages as needed
item_enumerator(board_id).each do |item|
  process_item(item) # Items processed one at a time
end
```

### JSON Streaming Parsers

Parse JSON incrementally:

```ruby
require 'json/stream' # json-stream gem

# Instead of JSON.parse(huge_response), register event callbacks
parser = JSON::Stream::Parser.new do
  start_object { @current_object = {} }
  end_object   { process_object(@current_object) }
  key   { |k| @current_key = k }
  value { |v| @current_object[@current_key] = v }
end

# Assuming a streaming HTTP body that yields chunks
response.body.each_chunk do |chunk|
  parser << chunk # Parse incrementally
end
```

**Use when**: Responses >10MB, memory-constrained environments

## Monitoring and Profiling API Usage

You can't improve what you don't measure.

### Instrumentation

Add instrumentation to API calls. Prepending a module (rather than reopening the class) lets `super` reach the gem's original implementation:

```ruby
module MondayInstrumentation
  def make_request(query)
    start_time = Time.now
    complexity_estimate = estimate_complexity(query) # your own estimator

    begin
      response = super
      duration = Time.now - start_time

      log_metrics(
        duration: duration,
        complexity: complexity_estimate,
        success: true
      )

      response
    rescue Monday::Error => e
      log_metrics(
        duration: Time.now - start_time,
        complexity: complexity_estimate,
        success: false,
        error_class: e.class.name
      )
      raise
    end
  end

  private

  def log_metrics(metrics)
    logger.info("Monday API call: #{metrics.to_json}")

    # Send to monitoring system
    StatsD.timing('monday.api.duration', metrics[:duration])
    StatsD.gauge('monday.api.complexity', metrics[:complexity])
    StatsD.increment("monday.api.#{metrics[:success] ? 'success' : 'error'}")
  end
end

Monday::Client.prepend(MondayInstrumentation)
```

### Key Metrics to Track

1. **Latency percentiles**: p50, p95, p99 response times
2. **Error rate**: Percentage of failed requests
3. **Complexity usage**: Total complexity consumed per time window
4. **Rate limit hits**: How often you hit rate limits
5. **Cache hit rate**: Percentage of requests served from cache
6. **Throughput**: Requests per second
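
Latency percentiles are simple to compute from a window of recorded durations; a minimal nearest-rank sketch:

```ruby
# Nearest-rank percentile over a sample of request durations (in seconds).
def percentile(samples, pct)
  return nil if samples.empty?

  sorted = samples.sort
  rank = (pct / 100.0 * sorted.length).ceil - 1
  sorted[rank.clamp(0, sorted.length - 1)]
end

durations = [0.12, 0.34, 0.08, 0.91, 0.25, 0.40, 1.80, 0.15, 0.22, 0.30]
puts "p50=#{percentile(durations, 50)} p95=#{percentile(durations, 95)}"
```

The p95/p99 values matter more than the mean: a single 1.8 s outlier barely moves the average but dominates the tail your users actually feel.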

### Profiling Bottlenecks

Use Ruby profiling tools:

```ruby
require 'benchmark'

result = Benchmark.measure do
  client.board.query(ids: board_ids)
end

puts "API call took #{result.real} seconds"

# Or use more detailed profiling
require 'ruby-prof'

RubyProf.start
sync_all_boards
result = RubyProf.stop

printer = RubyProf::FlatPrinter.new(result)
printer.print(STDOUT)
```

### Alerting

Set up alerts for performance degradation:

```ruby
class PerformanceMonitor
  def check_api_performance
    avg_duration = get_average_duration(last: 5.minutes)

    if avg_duration > 2.0
      alert("Monday API latency elevated: #{avg_duration}s average")
    end

    error_rate = get_error_rate(last: 5.minutes)

    if error_rate > 0.05
      alert("Monday API error rate elevated: #{error_rate * 100}%")
    end
  end
end
```

## Trade-offs: Latency vs Throughput vs Cost

Different optimization strategies prioritize different dimensions.

### Optimizing for Latency (User Experience)

**Goal**: Minimize time to complete individual requests

**Strategies:**
- Parallel requests (fetch multiple resources simultaneously)
- Aggressive caching (serve from cache even if slightly stale)
- Request only essential fields
- Use CDN for static assets

**Trade-offs:**
- Higher cost (more API calls, bigger caches)
- Lower throughput (parallel requests consume more resources)

**Example:**
```ruby
# Fetch board and items in parallel
board_thread = Thread.new { client.board.query(ids: [board_id]) }
items_thread = Thread.new { client.item.query_by_board(board_id: board_id) }

board = board_thread.value
items = items_thread.value
# Result: ~2x faster than sequential
```

### Optimizing for Throughput (System Capacity)

**Goal**: Process maximum requests per second

**Strategies:**
- Queue requests, process in batches
- Connection pooling
- Async I/O
- Distributed processing

**Trade-offs:**
- Higher latency (queuing delays)
- More complex infrastructure

**Example:**
```ruby
# Queue and batch process
class BoardSyncQueue
  QUEUE = [] # simple in-memory queue; use real job infrastructure in production

  def self.add(board_id)
    QUEUE << board_id
  end

  def self.process
    until QUEUE.empty?
      batch = QUEUE.shift(100) # Process 100 at a time (FIFO)
      client.board.query(ids: batch)
    end
  end
end
# Result: 100x fewer API calls, but individual requests slower
```

### Optimizing for Cost (Efficiency)

**Goal**: Minimize API quota usage and infrastructure costs

**Strategies:**
- Aggressive caching (long TTLs)
- Batch operations
- Request minimal fields
- Lazy loading (only fetch when needed)

**Trade-offs:**
- Stale data (long cache TTLs)
- Higher latency (no parallel requests)
- Lower throughput (sequential processing)

**Example:**
```ruby
# Cache with long TTL, minimal fields
Rails.cache.fetch("board_#{board_id}", expires_in: 24.hours) do
  client.board.query(
    ids: [board_id],
    select: ['id', 'name'] # Only essential fields
  )
end
# Result: Minimal API usage, but data up to 24 hours stale
```

### Balancing the Triangle

Most applications need a balance:

```ruby
class BalancedBoardFetcher
  def fetch(board_id, strategy: :balanced)
    case strategy
    when :fast
      fetch_parallel_with_short_cache(board_id)
    when :efficient
      fetch_sequential_with_long_cache(board_id)
    when :balanced
      fetch_sequential_with_medium_cache(board_id)
    end
  end

  private

  def fetch_parallel_with_short_cache(board_id)
    # Optimize for latency
    Rails.cache.fetch("board_#{board_id}", expires_in: 5.minutes) do
      # Parallel fetching, full fields
    end
  end

  def fetch_sequential_with_long_cache(board_id)
    # Optimize for cost
    Rails.cache.fetch("board_#{board_id}", expires_in: 24.hours) do
      # Sequential, minimal fields
    end
  end

  def fetch_sequential_with_medium_cache(board_id)
    # Balance
    Rails.cache.fetch("board_#{board_id}", expires_in: 1.hour) do
      # Sequential, necessary fields
    end
  end
end
```

## Key Takeaways

1. **Reduce API calls first**: The biggest performance gain comes from fewer requests
2. **Select only needed fields**: Every field has a complexity cost
3. **Paginate intelligently**: Choose a strategy based on dataset size and access pattern
4. **Batch operations**: Combine multiple operations into single requests
5. **Cache strategically**: Multi-layer caching with appropriate TTLs
6. **Use connection pooling**: Reuse connections to eliminate overhead
7. **Go async for scale**: Background jobs and async I/O for high-volume operations
8. **Manage memory**: Stream large responses, use lazy evaluation
9. **Monitor everything**: Track latency, throughput, errors, and complexity usage
10. **Know your priorities**: Optimize for latency, throughput, or cost based on your needs

Performance optimization is an ongoing process. Start with the biggest bottlenecks (usually the number of API calls), measure the impact, and iterate.