monday_ruby 1.1.0 → 1.2.0
- checksums.yaml +4 -4
- data/.env +1 -1
- data/.rubocop.yml +2 -1
- data/CHANGELOG.md +14 -0
- data/CONTRIBUTING.md +104 -0
- data/README.md +146 -142
- data/docs/.vitepress/config.mjs +255 -0
- data/docs/.vitepress/theme/index.js +4 -0
- data/docs/.vitepress/theme/style.css +43 -0
- data/docs/README.md +80 -0
- data/docs/explanation/architecture.md +507 -0
- data/docs/explanation/best-practices/errors.md +478 -0
- data/docs/explanation/best-practices/performance.md +1084 -0
- data/docs/explanation/best-practices/rate-limiting.md +630 -0
- data/docs/explanation/best-practices/testing.md +820 -0
- data/docs/explanation/column-values.md +857 -0
- data/docs/explanation/design.md +795 -0
- data/docs/explanation/graphql.md +356 -0
- data/docs/explanation/migration/v1.md +808 -0
- data/docs/explanation/pagination.md +447 -0
- data/docs/guides/advanced/batch.md +1274 -0
- data/docs/guides/advanced/complex-queries.md +1114 -0
- data/docs/guides/advanced/errors.md +818 -0
- data/docs/guides/advanced/pagination.md +934 -0
- data/docs/guides/advanced/rate-limiting.md +981 -0
- data/docs/guides/authentication.md +286 -0
- data/docs/guides/boards/create.md +386 -0
- data/docs/guides/boards/delete.md +405 -0
- data/docs/guides/boards/duplicate.md +511 -0
- data/docs/guides/boards/query.md +530 -0
- data/docs/guides/boards/update.md +453 -0
- data/docs/guides/columns/create.md +452 -0
- data/docs/guides/columns/metadata.md +492 -0
- data/docs/guides/columns/query.md +455 -0
- data/docs/guides/columns/update-multiple.md +459 -0
- data/docs/guides/columns/update-values.md +509 -0
- data/docs/guides/files/add-to-column.md +40 -0
- data/docs/guides/files/add-to-update.md +37 -0
- data/docs/guides/files/clear-column.md +33 -0
- data/docs/guides/first-request.md +285 -0
- data/docs/guides/folders/manage.md +750 -0
- data/docs/guides/groups/items.md +626 -0
- data/docs/guides/groups/manage.md +501 -0
- data/docs/guides/installation.md +169 -0
- data/docs/guides/items/create.md +493 -0
- data/docs/guides/items/delete.md +514 -0
- data/docs/guides/items/query.md +605 -0
- data/docs/guides/items/subitems.md +483 -0
- data/docs/guides/items/update.md +699 -0
- data/docs/guides/updates/manage.md +619 -0
- data/docs/guides/use-cases/dashboard.md +1421 -0
- data/docs/guides/use-cases/import.md +1962 -0
- data/docs/guides/use-cases/task-management.md +1381 -0
- data/docs/guides/workspaces/manage.md +502 -0
- data/docs/index.md +69 -0
- data/docs/package-lock.json +2468 -0
- data/docs/package.json +13 -0
- data/docs/reference/client.md +540 -0
- data/docs/reference/configuration.md +586 -0
- data/docs/reference/errors.md +693 -0
- data/docs/reference/resources/account.md +208 -0
- data/docs/reference/resources/activity-log.md +369 -0
- data/docs/reference/resources/board-view.md +359 -0
- data/docs/reference/resources/board.md +393 -0
- data/docs/reference/resources/column.md +543 -0
- data/docs/reference/resources/file.md +236 -0
- data/docs/reference/resources/folder.md +386 -0
- data/docs/reference/resources/group.md +507 -0
- data/docs/reference/resources/item.md +348 -0
- data/docs/reference/resources/subitem.md +267 -0
- data/docs/reference/resources/update.md +259 -0
- data/docs/reference/resources/workspace.md +213 -0
- data/docs/reference/response.md +560 -0
- data/docs/tutorial/first-integration.md +713 -0
- data/lib/monday/client.rb +24 -0
- data/lib/monday/configuration.rb +5 -0
- data/lib/monday/request.rb +15 -0
- data/lib/monday/resources/base.rb +4 -0
- data/lib/monday/resources/file.rb +56 -0
- data/lib/monday/util.rb +1 -0
- data/lib/monday/version.rb +1 -1
- metadata +87 -4
data/docs/explanation/best-practices/rate-limiting.md
@@ -0,0 +1,630 @@

# Rate Limiting Best Practices

Rate limiting is a fundamental aspect of working with any API. Understanding why rate limits exist, how they work, and how to work effectively within them is crucial for building reliable integrations with monday.com.

## Why Rate Limiting Exists

Rate limiting serves two primary purposes:

### 1. Protecting the API Infrastructure

Without rate limits, a single client could overwhelm the API with requests, degrading performance for all users. Rate limits ensure fair distribution of resources and prevent accidental (or intentional) abuse.

Think of it like a highway: without speed limits and traffic control, congestion would make the road unusable for everyone. Rate limits are the "traffic control" of APIs.

### 2. Encouraging Efficient API Usage

Rate limits incentivize developers to write efficient queries. Instead of making 100 requests for individual items, you're encouraged to batch them into a single request. This benefits both you (fewer network round-trips) and the API (fewer requests to process).

## Monday.com's Rate Limiting Strategy

Unlike many APIs that use simple requests-per-second limits, monday.com uses a **complexity budget** system. This is more sophisticated and fair.

### Complexity Budget Model

Each monday.com account gets a **complexity budget** that regenerates over time:

- The budget regenerates at a fixed rate (e.g., 100,000 points per minute)
- Each query consumes points based on its complexity
- Simple queries cost few points; complex queries cost many
- When the budget is depleted, requests are rate limited until it regenerates

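To make the model concrete, here is some rough arithmetic (the 100,000-point budget and the per-query costs are illustrative assumptions; actual numbers depend on your account):

```ruby
# Rough arithmetic, assuming a 100,000-point-per-minute budget
budget            = 100_000
simple_query_cost = 10   # a few scalar fields on one board
heavy_query_cost  = 500  # nested groups/items/column_values

budget / simple_query_cost # => 10000 simple queries per minute
budget / heavy_query_cost  # => 200 heavy queries per minute
```
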
**Why complexity-based?** It's fairer. A simple query that fetches one field shouldn't cost the same as a complex query joining multiple resources. Complexity budgets reward efficient queries.

### Example:

```ruby
# Low complexity (~10 points)
client.board.query(
  ids: [12345],
  select: ['id', 'name']
)

# High complexity (~500 points)
client.board.query(
  ids: [12345],
  select: [
    'id', 'name', 'description',
    { 'groups' => ['id', 'title', { 'items' => ['id', 'name', 'column_values'] }] }
  ]
)
```

The second query traverses multiple relationships and returns much more data, so it costs more.

## Complexity Calculation and Query Cost

Understanding query cost helps you stay within your complexity budget.

### Factors That Increase Complexity:

1. **Nested relationships**: Each level of nesting adds cost

   ```ruby
   # Low cost
   select: ['id', 'name']

   # Medium cost
   select: ['id', 'name', { 'items' => ['id', 'name'] }]

   # High cost
   select: ['id', { 'groups' => ['id', { 'items' => ['id', { 'column_values' => ['id', 'text'] }] }] }]
   ```

2. **Number of fields**: More fields = higher cost

   ```ruby
   # Lower cost
   select: ['id', 'name']

   # Higher cost
   select: ['id', 'name', 'description', 'state', 'board_kind', 'permissions', 'created_at', 'updated_at']
   ```

3. **Number of results**: Fetching 100 items costs more than fetching 10

   ```ruby
   # Lower cost
   client.item.query(ids: [123], limit: 10)

   # Higher cost
   client.item.query(ids: [123], limit: 100)
   ```

4. **Computed fields**: Fields that require calculation are more expensive (see the sketch below)

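As a sketch of the computed-field effect (the costs are hypothetical; `items_count` is used here as an example of a field the server must calculate):

```ruby
# Stored fields only (cheaper)
client.board.query(ids: [board_id], select: ['id', 'name'])

# Adding a computed field such as items_count (likely more expensive)
client.board.query(ids: [board_id], select: ['id', 'name', 'items_count'])
```
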
### Estimating Query Cost

monday.com's GraphQL API includes complexity information in responses. You can log this to understand your queries:

```ruby
response = client.board.query(ids: [board_id])
complexity = response.body.dig('data', 'complexity') # Only present if the query requested the complexity field
logger.info("Query complexity: #{complexity}")
```

**Rule of thumb**: Start simple and add fields incrementally. Monitor which queries cause rate limiting and optimize those.

## Rate Limiting Strategies: Proactive vs Reactive

There are two fundamental approaches to handling rate limits:

### Reactive Strategy (Handle Errors)

Wait until you hit the rate limit, then back off:

```ruby
def fetch_with_reactive_limiting
  client.board.query(ids: [board_id])
rescue Monday::ComplexityError
  # Rate limited - wait and retry
  sleep(60)
  retry
end
```

**Pros:**
- Simple to implement
- No complexity tracking needed
- Maximizes throughput when under the limit

**Cons:**
- Requests fail and must be retried
- Unpredictable latency (sudden delays when the limit is hit)
- Can create a thundering herd if multiple processes retry simultaneously

### Proactive Strategy (Track and Throttle)

Track your complexity usage and throttle before hitting the limit:

```ruby
class ComplexityTracker
  def initialize(budget_per_minute: 100_000)
    @budget = budget_per_minute
    @used = 0
    @window_start = Time.now
  end

  def track_request(estimated_cost)
    reset_if_new_window

    if @used + estimated_cost > @budget
      wait_time = 60 - (Time.now - @window_start)
      sleep(wait_time) if wait_time > 0
      reset_window
    end

    @used += estimated_cost
  end

  private

  def reset_if_new_window
    if Time.now - @window_start >= 60
      reset_window
    end
  end

  def reset_window
    @used = 0
    @window_start = Time.now
  end
end
```

**Pros:**
- Predictable latency (no sudden rate limit errors)
- Better for user experience (no failed requests)
- More efficient (no wasted retry attempts)

**Cons:**
- Complex to implement
- Requires tracking state
- May be overly conservative (wasting budget)

### Hybrid Strategy (Best of Both)

Use proactive throttling with reactive fallback:

```ruby
def fetch_with_hybrid_limiting
  tracker.track_request(100) # estimated cost for this query

  begin
    client.board.query(ids: [board_id])
  rescue Monday::ComplexityError
    # Reactive fallback if estimation was wrong
    logger.warn("Hit rate limit despite throttling")
    sleep(60)
    retry
  end
end
```

This combines the predictability of proactive throttling with the safety net of reactive handling.

## Exponential Backoff: Why It Works

When you do hit a rate limit, exponential backoff is the gold standard retry strategy.

### Linear Backoff (Don't Use)

```ruby
# Bad: Linear backoff (make_request stands in for your API call)
attempts = 0
begin
  make_request
rescue Monday::ComplexityError
  attempts += 1
  raise if attempts > retry_count
  sleep(attempts * 5) # 5s, 10s, 15s, 20s
  retry
end
```

**Problem**: If many clients are rate-limited simultaneously (common during outages), they all retry at similar intervals, creating synchronized "waves" of requests that re-trigger the rate limit.

### Exponential Backoff (Use This)

```ruby
# Good: Exponential backoff
attempts = 0
begin
  make_request
rescue Monday::ComplexityError
  attempts += 1
  raise if attempts > retry_count
  sleep(2**(attempts - 1)) # 1s, 2s, 4s, 8s, 16s, 32s
  retry
end
```

**Why it works**:
1. **Backs off quickly**: Gives the API (and your budget) time to recover
2. **Disperses retries**: Different processes retry at different times
3. **Self-limiting**: Long delays naturally limit retry attempts

### Adding Jitter (Even Better)

```ruby
# Best: Exponential backoff with jitter
attempts = 0
begin
  make_request
rescue Monday::ComplexityError
  attempts += 1
  raise if attempts > retry_count
  base_delay = 2**(attempts - 1)
  jitter = rand(0.0..base_delay * 0.1) # Add 0-10% randomness
  sleep(base_delay + jitter)
  retry
end
```

**Jitter** adds randomness to prevent synchronized retries. If 100 clients are rate-limited at the same moment, jitter ensures they don't all retry at exactly the same time.

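Putting the pieces together, here is a minimal reusable sketch of the pattern (the `with_backoff` helper and its parameters are illustrative, not part of the library):

```ruby
# Sketch: retry a block with exponential backoff and jitter
def with_backoff(max_retries: 5)
  attempts = 0
  begin
    yield
  rescue Monday::ComplexityError
    attempts += 1
    raise if attempts > max_retries
    delay = 2**(attempts - 1)
    sleep(delay + rand(0.0..delay * 0.1))
    retry
  end
end

with_backoff { client.board.query(ids: [board_id]) }
```
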
## Queuing Requests: Benefits and Trade-offs

For high-volume integrations, queuing requests can smooth out traffic and prevent rate limiting.

### Basic Queue Pattern

```ruby
class MondayRequestQueue
  def initialize(requests_per_minute: 60)
    @queue = Queue.new
    @rate = requests_per_minute
    start_worker
  end

  def enqueue(request)
    @queue.push(request)
  end

  private

  def start_worker
    Thread.new do
      loop do
        request = @queue.pop
        execute_request(request)
        sleep(60.0 / @rate) # Throttle to stay under limit
      end
    end
  end

  def execute_request(request)
    request.call
  rescue Monday::Error => e
    handle_error(e, request)
  end
end
```

### Benefits:

1. **Smooth traffic**: Requests sent at steady rate, not bursts
2. **Automatic throttling**: Queue ensures you never exceed rate limit
3. **Resilience**: Failed requests can be re-queued
4. **Prioritization**: Implement priority queues for urgent requests (see the sketch below)

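As a sketch of the prioritization idea (the two-queue design is illustrative and assumes the single-worker pattern above):

```ruby
# Sketch: urgent requests jump ahead of bulk work
class PriorityRequestQueue
  def initialize
    @urgent = Queue.new
    @bulk = Queue.new
  end

  def enqueue(request, urgent: false)
    (urgent ? @urgent : @bulk).push(request)
  end

  # The single worker drains urgent requests before bulk ones
  # (blocks on @bulk when both queues are empty)
  def next_request
    @urgent.empty? ? @bulk.pop : @urgent.pop
  end
end
```
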
### Trade-offs:

1. **Added latency**: Requests wait in queue before execution
2. **Complexity**: Requires queue management, monitoring, error handling
3. **Memory usage**: Large queues consume memory
4. **Lost requests**: Queue contents lost if process crashes (use persistent queue like Sidekiq/Redis)

### When to Use:

- **High volume**: Processing hundreds/thousands of requests per hour
- **Background jobs**: Non-interactive operations where latency is acceptable
- **Batch operations**: Syncing large datasets

### When Not to Use:

- **Interactive requests**: Users waiting for immediate responses
- **Low volume**: Simple applications with occasional API calls
- **Real-time needs**: Time-sensitive operations where queuing delay is unacceptable

## Caching Responses: When It Helps

Caching API responses can dramatically reduce your rate limit consumption.

### What to Cache

**Good candidates:**
- Reference data (board schemas, column definitions)
- Slow-changing data (board names, user lists)
- Frequently accessed data (current user info)

**Poor candidates:**
- Real-time data (item status updates)
- User-specific data (for multi-user apps)
- Large datasets (memory constraints)

### Cache Implementation

```ruby
def get_board_schema(board_id)
  cache_key = "monday_board_schema_#{board_id}"

  Rails.cache.fetch(cache_key, expires_in: 1.hour) do
    client.board.query(
      ids: [board_id],
      select: ['id', 'name', { 'columns' => ['id', 'title', 'type'] }]
    )
  end
end
```

### TTL Considerations

Choosing the right Time-To-Live (TTL) is an art:

**Short TTL (minutes):**
- Use for: Moderately dynamic data (item counts, recent updates)
- Pro: More accurate data
- Con: More API calls, less rate limit savings

**Medium TTL (hours):**
- Use for: Slowly changing data (board configuration, user lists)
- Pro: Balance between freshness and efficiency
- Con: Data can be stale for parts of the day

**Long TTL (days):**
- Use for: Static reference data (workspace structure, column types)
- Pro: Maximum rate limit savings
- Con: Stale data if structure changes

**Indefinite (manual invalidation):**
- Use for: Truly static data
- Pro: Zero API calls for cached data
- Con: Must invalidate on changes (complex)

### Cache Invalidation

The hard part of caching is knowing when to invalidate:

```ruby
def update_board(board_id, attributes)
  result = client.board.update(board_id: board_id, **attributes)

  # Invalidate cache after update
  Rails.cache.delete("monday_board_schema_#{board_id}")
  Rails.cache.delete("monday_board_items_#{board_id}")

  result
end
```

**Cache invalidation strategies:**

1. **Time-based (TTL)**: Simplest, works for most use cases
2. **Event-based**: Invalidate when data changes (requires tracking)
3. **Versioned keys**: Include version in cache key, bump on change (see the sketch below)
4. **Background refresh**: Refresh cache before expiry (always fresh, no cache misses)

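A minimal sketch of the versioned-key strategy (the key names mirror the examples above; the helper names are illustrative):

```ruby
# Sketch: include a version in the cache key; bump it to invalidate
def board_cache_key(board_id)
  version = Rails.cache.read("monday_board_version_#{board_id}") || 1
  "monday_board_schema_#{board_id}_v#{version}"
end

def invalidate_board_cache(board_id)
  version = Rails.cache.read("monday_board_version_#{board_id}") || 1
  Rails.cache.write("monday_board_version_#{board_id}", version + 1)
  # Old versioned entries are never read again and age out via TTL
end
```
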
## Optimizing Query Complexity

The best way to avoid rate limiting is to reduce query complexity.

### Technique 1: Request Only Needed Fields

```ruby
# Bad: Fetching everything (high complexity)
client.board.query(
  ids: [board_id],
  select: ['id', 'name', 'description', 'state', 'board_kind',
           'permissions', { 'groups' => ['id', 'title', { 'items' => ['id', 'name'] }] }]
)

# Good: Fetching only what's needed (low complexity)
client.board.query(
  ids: [board_id],
  select: ['id', 'name', { 'groups' => ['id'] }]
)
```

**Every field has a cost**. Only request fields you actually use.

### Technique 2: Pagination Over Large Queries

```ruby
# Bad: Fetch all items at once (very high complexity)
client.item.query_by_board(
  board_id: board_id,
  limit: 1000
)

# Good: Paginate in smaller chunks (distributed complexity)
page = 1
loop do
  items = client.item.query_by_board(
    board_id: board_id,
    limit: 25,
    page: page
  )

  break if items.empty?
  process_items(items)
  page += 1
  sleep(0.5) # Small delay between pages
end
```

Pagination spreads complexity over time, staying within your budget.

### Technique 3: Batch Related Requests

```ruby
# Bad: Multiple queries (high total complexity)
boards.each do |board_id|
  client.board.query(ids: [board_id])
end

# Good: Single batched query (lower total complexity)
client.board.query(ids: board_ids)
```

monday.com's API supports fetching multiple resources in one query. Use it.

### Technique 4: Denormalize When Possible

If you frequently need the same data, consider storing it locally:

```ruby
# Instead of querying monday.com every time
def get_item_status(item_id)
  client.item.query(ids: [item_id], select: ['status'])
end

# Store the status locally and sync periodically
class Item < ApplicationRecord
  def self.sync_statuses
    items = client.item.query(ids: Item.pluck(:monday_id), select: ['id', 'status'])
    items.each do |item_data|
      Item.find_by(monday_id: item_data['id'])&.update(status: item_data['status'])
    end
  end
end
```

**Trade-off**: Data staleness vs. API efficiency. Choose based on your freshness requirements.

## Monitoring Rate Limit Usage

You can't optimize what you don't measure.

### What to Monitor

1. **Rate limit errors**: How often are you hitting the limit?
2. **Complexity per query**: Which queries are most expensive?
3. **Total complexity**: How much of your budget are you using?
4. **Retry frequency**: How many retries are needed?

### Monitoring Implementation

```ruby
class MondayApiMonitor
  def self.track_request(query, complexity, duration)
    StatsD.increment('monday.api.requests')
    StatsD.gauge('monday.api.complexity', complexity)
    StatsD.timing('monday.api.duration', duration)
  end

  def self.track_rate_limit_error
    StatsD.increment('monday.api.rate_limit_errors')
    alert_if_threshold_exceeded
  end

  def self.alert_if_threshold_exceeded
    error_rate = get_error_rate
    if error_rate > 0.05 # Alert if >5% of requests rate limited
      notify_team("Monday API rate limit errors elevated: #{error_rate}")
    end
  end
  private_class_method :alert_if_threshold_exceeded
end
```

### Setting Alerts

Configure alerts for:
- **High error rate**: >5% of requests rate limited
- **Approaching budget**: Using >80% of complexity budget
- **Sudden spikes**: Complexity usage increases >50% hour-over-hour

Early warning allows you to optimize before users are impacted.

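As an illustration of the budget alert (a sketch reusing the `StatsD` and `notify_team` placeholders above; the 100,000-point budget is an assumption):

```ruby
class MondayApiMonitor
  # Sketch: flag when a minute's complexity usage crosses 80% of the budget
  def self.track_budget_usage(used_this_minute, budget: 100_000)
    usage = used_this_minute.to_f / budget
    StatsD.gauge('monday.api.budget_usage', usage)
    notify_team("Monday API at #{(usage * 100).round}% of complexity budget") if usage > 0.8
  end
end
```
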
## Distributed Rate Limiting Challenges

In distributed systems (multiple servers/processes), rate limiting becomes complex.

### The Problem

Each process doesn't know what the others are doing:

```ruby
# Process 1
client.board.query(...) # Uses 1000 complexity points

# Process 2 (simultaneously)
client.board.query(...) # Also uses 1000 complexity points

# Combined: 2000 points consumed, but neither process knows
```

If each process thinks it has the full budget, they'll collectively exceed the limit.

### Solution 1: Centralized Rate Limiter

Use Redis to track a shared complexity budget:

```ruby
class DistributedRateLimiter
  def initialize(redis, budget_per_minute: 100_000)
    @redis = redis
    @budget = budget_per_minute
  end

  def acquire(cost)
    key = "monday_complexity:#{Time.now.to_i / 60}" # Per-minute key

    @redis.watch(key)
    used = @redis.get(key).to_i

    if used + cost > @budget
      @redis.unwatch
      return false # Budget exceeded
    end

    result = @redis.multi do |tx|
      tx.incrby(key, cost)
      tx.expire(key, 120) # Expire after 2 minutes
    end

    !result.nil? # nil means another process updated the key first; treat as not acquired
  end
end

# Usage
until rate_limiter.acquire(estimated_cost)
  sleep(60) # Wait for the next window
end

client.board.query(...)
```

### Solution 2: Partition Budget

Divide the complexity budget among processes:

```ruby
# If you have 4 worker processes
process_budget = TOTAL_BUDGET / 4

# Each process tracks its own portion
tracker = ComplexityTracker.new(budget_per_minute: process_budget)
```

**Trade-off**: May underutilize the budget if some processes are idle while others are busy.

### Solution 3: Queue-Based (Recommended)

Use a centralized queue (Sidekiq, etc.) with a single worker:

```ruby
# All processes enqueue requests
MondayRequestJob.perform_async(board_id)

# A single worker processes the queue at a controlled rate
class MondayRequestJob
  include Sidekiq::Job
  # Note: the throttle option requires a throttling middleware (e.g. the sidekiq-throttler gem)
  sidekiq_options throttle: { threshold: 60, period: 1.minute }

  def perform(board_id)
    client.board.query(ids: [board_id])
  end
end
```

This naturally serializes requests and prevents distributed rate limiting issues.

## Key Takeaways

1. **Understand complexity budgets**: monday.com uses complexity, not simple request counts
2. **Be proactive**: Track usage and throttle before hitting limits
3. **Use exponential backoff**: When rate limited, back off exponentially with jitter
4. **Cache strategically**: Cache slow-changing data with appropriate TTLs
5. **Optimize queries**: Request only needed fields, paginate large datasets
6. **Monitor actively**: Track complexity usage, error rates, and set alerts
7. **Queue for scale**: Use queues for high-volume, distributed systems
8. **Test your limits**: Understand your actual complexity budget through monitoring

Rate limiting isn't an obstacle; it's a design constraint that encourages efficient, scalable API usage. Work with it, not against it.