memory-profiler 1.1.2 → 1.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 06ea6c4892f9d17ac8fa69ba374d2a6a94ab8b809be840b26cb544b18b529edc
- data.tar.gz: a1c283a60721f988cac65878551eba4472a4dd889649d42b1409ba9a7b8b6756
+ metadata.gz: fdf7841a9d0249712c9bd140447e0063b5b543cae942bc1dedc09682472f3aaf
+ data.tar.gz: 14f9521f93447843b93aff1dfe15b12fce5f9491bb6a6055caf1d90cce304f17
  SHA512:
- metadata.gz: 22a34ac3cd4ad0c0eead78deef38cd79f20f6c96338f0af5eff191a3d2f2d99ea7dc44a40993838ead2678cb52a257fbed2548e08248e5e389277801c5895cae
- data.tar.gz: 49d267900c65e5481b6d784766e17721ba9919b984dcaf84737022622f4bbb9379773a0c98166a5d2b767192c787dd9fd13a8c43e98db3abd8d3dcb86aec74c5
+ metadata.gz: 43ba6c482b4f9e6e80e5f3213435b158b954087790e19ac4effb77fe4d06d602027c6668f63aa0b68bd592c073ac112f98735da4249d68829913d1328bb21cb4
+ data.tar.gz: b30e212a1d6a315a4fc94d9dfad066d0f1e2ec4f73dbf7f5bf757bd85e6279ddb088e87168cb6cc158b99fcb218df5ec6f3e7ca60254c8a071478959cbda4817
checksums.yaml.gz.sig CHANGED
Binary file
data/context/getting-started.md CHANGED
@@ -1,6 +1,6 @@
  # Getting Started
 
- This guide explains how to use `memory-profiler` to detect and diagnose memory leaks in Ruby applications.
+ This guide explains how to use `memory-profiler` to automatically detect and diagnose memory leaks in Ruby applications.
 
  ## Installation
 
@@ -12,218 +12,127 @@ $ bundle add memory-profiler
 
  ## Core Concepts
 
- Memory leaks happen when your application creates objects that should be garbage collected but remain referenced indefinitely. Over time, this causes memory usage to grow unbounded, eventually leading to performance degradation or out-of-memory crashes.
+ Memory leaks happen when your application creates objects that should be garbage collected but remain referenced indefinitely. Over time, this causes unbounded memory growth, leading to performance degradation or crashes.
 
- `memory-profiler` helps you find memory leaks by tracking object allocations in real-time:
-
- - **{ruby Memory::Profiler::Capture}** monitors allocations using Ruby's internal NEWOBJ/FREEOBJ events.
- - **{ruby Memory::Profiler::CallTree}** aggregates allocation call paths to identify leak sources.
+ - {ruby Memory::Profiler::Capture} monitors allocations using Ruby's internal NEWOBJ/FREEOBJ events.
+ - {ruby Memory::Profiler::CallTree} aggregates allocation call paths to identify leak sources.
  - **No heap enumeration** - uses O(1) counters updated automatically by the VM.
 
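The retention mechanism behind this can be demonstrated with nothing but the standard library (a plain-Ruby illustration, not the gem's API): objects stay live for as long as something still references them.

```ruby
# Hypothetical illustration of a leak: Hashes survive GC because a long-lived
# array keeps referencing them.
require "objspace"

RETAINED = []

GC.start
before = ObjectSpace.each_object(Hash).count

1000.times { RETAINED << {} }
GC.start # The new Hashes survive: RETAINED still references every one of them.

growth = ObjectSpace.each_object(Hash).count - before
puts growth
```

Dropping the reference (`RETAINED.clear`) and running GC again would let the count fall back, which is exactly the behaviour a leak never exhibits.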
- ## Usage
-
- ### Monitor Memory Growth
+ ## Basic Usage
 
- Start by identifying which classes are accumulating objects:
+ The simplest way to detect memory leaks is to run the automatic sampler:
 
  ~~~ ruby
  require 'memory/profiler'
 
- # Create a capture instance:
- capture = Memory::Profiler::Capture.new
+ # Create a sampler that monitors all allocations:
+ sampler = Memory::Profiler::Sampler.new(
+ # Call stack depth for analysis:
+ depth: 10,
 
- # Start tracking all object allocations:
- capture.start
+ # Enable detailed tracking after 10 increases:
+ increases_threshold: 10
+ )
 
- # Run your application code...
- run_your_app
+ sampler.start
 
- # Check live object counts for common classes:
- puts "Hashes: #{capture.count_for(Hash)}"
- puts "Arrays: #{capture.count_for(Array)}"
- puts "Strings: #{capture.count_for(String)}"
+ # Run periodic sampling in a background thread:
+ Thread.new do
+ sampler.run(interval: 60) do |sample|
+ puts "⚠️ #{sample.target} growing: #{sample.current_size} objects (#{sample.increases} increases)"
+
+ # After 10 increases, detailed statistics are automatically available:
+ if sample.increases >= 10
+ statistics = sampler.statistics(sample.target)
+ puts "Top leak sources:"
+ statistics[:top_paths].each do |path_data|
+ puts " #{path_data[:count]}x from: #{path_data[:path].first}"
+ end
+ end
+ end
+ end
 
- capture.stop
+ # Your application runs here...
+ objects = []
+ while true
+ # Simulate a memory leak:
+ objects << Hash.new
+ sleep 0.1
+ end
  ~~~
 
- **What this tells you**: Which object types are growing over time. If Hash count keeps increasing across multiple samples, you likely have a Hash leak.
+ **What happens:**
+ 1. Sampler automatically tracks every class that allocates objects.
+ 2. Every 60 seconds, checks if any class grew significantly (>1000 objects).
+ 3. Reports growth via the block you provide.
+ 4. After 10 sustained increases, automatically captures call paths.
+ 5. You can then query `statistics(klass)` to find leak sources.
 
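The detection loop described in the guide can be sketched in plain Ruby (a simplified model, not the gem's implementation — the real sampler uses O(1) VM counters rather than heap scans, and the `>1000` threshold is taken from the description above):

```ruby
# Simplified growth-detection loop: count live Hashes per "sample" and record
# an increase whenever the delta exceeds the threshold.
require "objspace"

def live_hashes
  GC.start
  ObjectSpace.each_object(Hash).count
end

retained = []
increases = 0
last = live_hashes

3.times do
  1500.times { retained << {} } # simulate a leak between samples
  current = live_hashes
  increases += 1 if current - last > 1000 # sustained growth event
  last = current
end

puts increases
```

A healthy workload allocates and frees, so deltas hover near zero; a leak produces an uninterrupted run of increases, which is what the `increases_threshold` watches for.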
- ### Find the Leak Source
+ ## Manual Investigation
 
- Once you've identified a leaking class, use call path analysis to find WHERE allocations come from:
+ If you already know which class is leaking, you can investigate immediately:
 
  ~~~ ruby
- # Create a sampler with call path analysis:
- sampler = Memory::Profiler::Sampler.new(depth: 10)
+ sampler = Memory::Profiler::Sampler.new(depth: 15)
+ sampler.start
 
- # Track the leaking class with analysis:
+ # Enable detailed tracking for specific class:
  sampler.track_with_analysis(Hash)
- sampler.start
 
  # Run code that triggers the leak:
- simulate_leak
+ 1000.times { process_request }
 
- # Analyze where allocations come from:
+ # Analyze:
  statistics = sampler.statistics(Hash)
 
- puts "Live objects: #{statistics[:live_count]}"
+ puts "Live Hashes: #{statistics[:live_count]}"
  puts "\nTop allocation sources:"
  statistics[:top_paths].first(5).each do |path_data|
- puts "\n#{path_data[:count]} allocations from:"
- path_data[:path].each { |frame| puts " #{frame}" }
+ puts "\n#{path_data[:count]} allocations from:"
+ path_data[:path].each { |frame| puts " #{frame}" }
  end
 
- sampler.stop
- ~~~
-
- **What this shows**: The complete call stacks that led to Hash allocations. Look for unexpected paths or paths that appear repeatedly.
-
- ## Real-World Example
-
- Let's say you notice your app's memory growing over time. Here's how to diagnose it:
-
- ~~~ ruby
- require 'memory/profiler'
-
- # Setup monitoring:
- capture = Memory::Profiler::Capture.new
- capture.start
-
- # Take baseline measurement:
- GC.start # Clean up old objects first
- baseline = {
- hashes: capture.count_for(Hash),
- arrays: capture.count_for(Array),
- strings: capture.count_for(String)
- }
-
- # Run your application for a period:
- # In production: sample periodically (every 60 seconds)
- # In development: run through typical workflows
- sleep 60
-
- # Check what grew:
- current = {
- hashes: capture.count_for(Hash),
- arrays: capture.count_for(Array),
- strings: capture.count_for(String)
- }
-
- # Report growth:
- current.each do |type, count|
- growth = count - baseline[type]
- if growth > 100
- puts "⚠️ #{type} grew by #{growth} objects"
- end
+ puts "\nHotspot frames:"
+ statistics[:hotspots].first(5).each do |location, count|
+ puts " #{location}: #{count}"
  end
 
- capture.stop
+ sampler.stop!
  ~~~
 
- If Hashes grew significantly, enable detailed tracking:
-
- ~~~ ruby
- # Create detailed sampler:
- sampler = Memory::Profiler::Sampler.new(depth: 15)
- sampler.track_with_analysis(Hash)
- sampler.start
-
- # Run suspicious code path:
- process_user_requests(1000)
-
- # Find the culprits:
- statistics = sampler.statistics(Hash)
- statistics[:top_paths].first(3).each_with_index do |path_data, i|
- puts "\n#{i+1}. #{path_data[:count]} Hash allocations:"
- path_data[:path].first(5).each { |frame| puts " #{frame}" }
- end
-
- sampler.stop
- ~~~
-
- ## Best Practices
-
- ### When Tracking in Production
-
- 1. **Start tracking AFTER startup**: Call `GC.start` before `capture.start` to avoid counting initialization objects
- 2. **Use count-only mode for monitoring**: `capture.track(Hash)` (no callback) has minimal overhead
- 3. **Enable analysis only when investigating**: Call path analysis has higher overhead
- 4. **Sample periodically**: Take measurements every 60 seconds rather than continuously
- 5. **Stop when done**: Always call `stop()` to remove event hooks
-
- ### Performance Considerations
-
- **Count-only tracking** (no callback):
- - Minimal overhead (~5-10% on allocation hotpath)
- - Safe for production monitoring
- - Tracks all classes automatically
-
- **Call path analysis** (with callback):
- - Higher overhead (captures `caller_locations` on every allocation)
- - Use during investigation, not continuous monitoring
- - Only track specific classes you're investigating
-
- ### Avoiding False Positives
-
- Objects allocated before tracking starts but freed after will show as negative or zero:
-
- ~~~ ruby
- # ❌ Wrong - counts existing objects:
- capture.start
- 100.times { {} }
- GC.start # Frees old + new objects → underflow
-
- # ✅ Right - clean slate first:
- GC.start # Clear old objects
- capture.start
- 100.times { {} }
- ~~~
-
- ## Common Scenarios
-
- ### Detecting Cache Leaks
-
- ~~~ ruby
- # Monitor your cache class:
- capture = Memory::Profiler::Capture.new
- capture.start
-
- cache_baseline = capture.count_for(CacheEntry)
-
- # Run for a period:
- sleep 300 # 5 minutes
-
- cache_current = capture.count_for(CacheEntry)
-
- if cache_current > cache_baseline * 2
- puts "⚠️ Cache is leaking! #{cache_current - cache_baseline} entries added"
- # Enable detailed tracking to find the source
- end
- ~~~
-
- ### Finding Retention in Request Processing
-
- ~~~ ruby
- # Track during request processing:
- sampler = Memory::Profiler::Sampler.new
- sampler.track_with_analysis(Hash)
- sampler.start
-
- # Process requests:
- 1000.times do
- process_request
- end
-
- # Check if Hashes are being retained:
- statistics = sampler.statistics(Hash)
-
- if statistics[:live_count] > 1000
- puts "Leaking #{statistics[:live_count]} Hashes per 1000 requests!"
- statistics[:top_paths].first(3).each do |path_data|
- puts "\n#{path_data[:count]}x from:"
- puts path_data[:path].join("\n ")
- end
- end
-
- sampler.stop
- ~~~
+ ## Understanding the Output
+
+ **Sample data** (from growth detection):
+ - `target`: The class showing growth
+ - `current_size`: Current live object count
+ - `increases`: Number of sustained growth events (>1000 objects each)
+ - `threshold`: Minimum growth to trigger an increase
+
+ **Statistics** (after detailed tracking enabled):
+ - `live_count`: Current retained objects
+ - `top_paths`: Complete call stacks ranked by allocation frequency
+ - `hotspots`: Individual frames aggregated across all paths
+
+ **Top paths** show WHERE objects are created:
+ ```
+ 50 allocations from:
+ app/services/processor.rb:45:in 'process_item'
+ app/workers/job.rb:23:in 'perform'
+ ```
+
+ **Hotspots** show which lines appear most across all paths:
+ ```
+ app/services/processor.rb:45: 150 ← This line in many different call stacks
+ ```
+
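The distinction between the two views can be reproduced with plain Ruby `tally` (an illustrative sketch with made-up frame names, not the gem's API):

```ruby
# Three fake call stacks; one frame appears in every stack.
paths = [
  ["processor.rb:45", "job.rb:23"],
  ["processor.rb:45", "job.rb:23"],
  ["processor.rb:45", "other.rb:9"],
]

# Top paths: identical complete stacks, counted.
top_paths = paths.tally.max_by { |_, count| count }

# Hotspots: individual frames, counted across all stacks.
hotspots = paths.flatten.tally.max_by { |_, count| count }

p top_paths # => [["processor.rb:45", "job.rb:23"], 2]
p hotspots  # => ["processor.rb:45", 3]
```

Here `processor.rb:45` has a hotspot count of 3 even though no single complete path occurs more than twice — which is why hotspots surface a leaky line that is reached through many different callers.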
+ ## Performance Considerations
+
+ **Automatic mode** (recommended for production):
+ - Minimal overhead initially (just counting).
+ - Detailed tracking only enabled when leaks detected.
+ - 60-second sampling interval is non-intrusive.
+
+ **Manual tracking** (for investigation):
+ - Higher overhead (captures `caller_locations` on every allocation).
+ - Use during debugging, not continuous monitoring.
+ - Only track specific classes you're investigating.
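The cost gap between plain counting and call-path capture is easy to measure directly (a rough, machine-dependent comparison; it says nothing about the gem's exact overhead):

```ruby
require "benchmark"

# Allocation alone, versus allocation plus a stack capture per object --
# the latter models what detailed tracking must pay on every NEWOBJ.
count_only = Benchmark.realtime { 50_000.times { {} } }
with_paths = Benchmark.realtime { 50_000.times { {}; caller_locations(1, 10) } }

puts format("counting only: %.4fs", count_only)
puts format("with call paths: %.4fs", with_paths)
```

The stack-capture variant is typically an order of magnitude slower per allocation, which is why detailed tracking is reserved for classes already suspected of leaking.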
data/context/index.yaml CHANGED
@@ -8,5 +8,9 @@ metadata:
  files:
  - path: getting-started.md
  title: Getting Started
- description: This guide explains how to use `memory-profiler` to detect and diagnose
- memory leaks in Ruby applications.
+ description: This guide explains how to use `memory-profiler` to automatically detect
+ and diagnose memory leaks in Ruby applications.
+ - path: rack-integration.md
+ title: Rack Integration
+ description: This guide explains how to integrate `memory-profiler` into Rack applications
+ for automatic memory leak detection.
data/context/rack-integration.md ADDED
@@ -0,0 +1,70 @@
+ # Rack Integration
+
+ This guide explains how to integrate `memory-profiler` into Rack applications for automatic memory leak detection.
+
+ ## Overview
+
+ The Rack middleware pattern provides a clean way to add memory monitoring to your application. The sampler runs in a background thread, automatically detecting leaks without impacting request processing.
+
+ ## Basic Middleware
+
+ Create a middleware that monitors memory in the background:
+
+ ~~~ ruby
+ # app/middleware/memory_monitoring.rb
+ require 'console'
+ require 'memory/profiler'
+
+ class MemoryMonitoring
+ def initialize(app)
+ @app = app
+
+ # Create sampler with automatic leak detection:
+ @sampler = Memory::Profiler::Sampler.new(
+ # Use up to 10 caller locations for leak call graph analysis:
+ depth: 10,
+ # Enable detailed tracking after 10 increases:
+ increases_threshold: 10
+ )
+
+ @sampler.start
+ Console.info("Memory monitoring enabled")
+
+ # Background thread runs periodic sampling:
+ @thread = Thread.new do
+ @sampler.run(interval: 60) do |sample|
+ Console.warn(sample.target, "Memory usage increased!", sample: sample)
+
+ # After threshold, get leak sources:
+ if sample.increases >= 10
+ if statistics = @sampler.statistics(sample.target)
+ Console.error(sample.target, "Memory leak analysis:", statistics: statistics)
+ end
+ end
+ end
+ end
+ end
+
+ def call(env)
+ @app.call(env)
+ end
+
+ def shutdown
+ @thread&.kill
+ @sampler&.stop!
+ end
+ end
+ ~~~
+
+ ## Adding to config.ru
+
+ Add the middleware to your Rack application:
+
+ ~~~ ruby
+ # config.ru
+ require_relative 'app/middleware/memory_monitoring'
+
+ use MemoryMonitoring
+
+ run YourApp
+ ~~~
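The request path of this middleware is a pure pass-through, which can be checked without Rack installed (a minimal stand-in with the same `call(env)` shape as `MemoryMonitoring`, sampler omitted):

```ruby
# Hypothetical stand-in middleware: requests must flow through unchanged,
# since all monitoring happens in the background thread, not per request.
class PassThrough
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  end
end

# A lambda is a valid Rack-style endpoint: env in, [status, headers, body] out.
app = ->(env) { [200, { "content-type" => "text/plain" }, ["ok"]] }
stack = PassThrough.new(app)

status, headers, body = stack.call({})
puts status
puts body.join
```

Because `call` adds no per-request work, the middleware's overhead is confined to the sampling thread, matching the guide's claim about request processing.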
data/ext/memory/profiler/capture.c CHANGED
@@ -3,6 +3,7 @@
  #include "capture.h"
  #include "allocations.h"
+ #include "queue.h"
 
  #include "ruby.h"
  #include "ruby/debug.h"
@@ -20,6 +21,18 @@ static VALUE Memory_Profiler_Capture = Qnil;
  static VALUE sym_newobj;
  static VALUE sym_freeobj;
 
+ // Queue item - freed object data to be processed after GC
+ struct Memory_Profiler_Queue_Item {
+ // The class of the freed object:
+ VALUE klass;
+
+ // The Allocations wrapper:
+ VALUE allocations;
+
+ // The state returned from callback on newobj:
+ VALUE state;
+ };
+
  // Main capture state
  struct Memory_Profiler_Capture {
  // class => VALUE (wrapped Memory_Profiler_Capture_Allocations).
@@ -27,6 +40,12 @@ struct Memory_Profiler_Capture {
 
  // Is tracking enabled (via start/stop):
  int enabled;
+
+ // Queue for freed objects (processed after GC via postponed job)
+ struct Memory_Profiler_Queue freed_queue;
+
+ // Handle for the postponed job
+ rb_postponed_job_handle_t postponed_job_handle;
  };
 
  // GC mark callback for tracked_classes table
@@ -53,6 +72,17 @@ static void Memory_Profiler_Capture_mark(void *ptr) {
  if (capture->tracked_classes) {
  st_foreach(capture->tracked_classes, Memory_Profiler_Capture_tracked_classes_mark, 0);
  }
+
+ // Mark freed objects in the queue:
+ for (size_t i = 0; i < capture->freed_queue.count; i++) {
+ struct Memory_Profiler_Queue_Item *freed = Memory_Profiler_Queue_at(&capture->freed_queue, i);
+ rb_gc_mark_movable(freed->klass);
+ rb_gc_mark_movable(freed->allocations);
+
+ if (freed->state) {
+ rb_gc_mark_movable(freed->state);
+ }
+ }
  }
 
  // GC free function
@@ -63,6 +93,9 @@ static void Memory_Profiler_Capture_free(void *ptr) {
  st_free_table(capture->tracked_classes);
  }
 
+ // Free the queue (elements are stored directly, just free the queue)
+ Memory_Profiler_Queue_free(&capture->freed_queue);
+
  xfree(capture);
  }
 
@@ -75,6 +108,9 @@ static size_t Memory_Profiler_Capture_memsize(const void *ptr) {
  size += capture->tracked_classes->num_entries * (sizeof(st_data_t) + sizeof(struct Memory_Profiler_Capture_Allocations));
  }
 
+ // Add size of freed queue (elements stored directly)
+ size += capture->freed_queue.capacity * capture->freed_queue.element_size;
+
  return size;
  }
 
@@ -113,6 +149,16 @@ static void Memory_Profiler_Capture_compact(void *ptr) {
  rb_raise(rb_eRuntimeError, "tracked_classes modified during GC compaction");
  }
  }
+
+ // Update freed objects in the queue
+ for (size_t i = 0; i < capture->freed_queue.count; i++) {
+ struct Memory_Profiler_Queue_Item *freed = Memory_Profiler_Queue_at(&capture->freed_queue, i);
+
+ // Update all VALUEs if they moved during compaction
+ freed->klass = rb_gc_location(freed->klass);
+ freed->allocations = rb_gc_location(freed->allocations);
+ freed->state = rb_gc_location(freed->state);
+ }
  }
 
  static const rb_data_type_t Memory_Profiler_Capture_type = {
@@ -144,6 +190,36 @@ const char *event_flag_name(rb_event_flag_t event_flag) {
  }
  }
 
+ // Postponed job callback - processes queued freed objects
+ // This runs after GC completes, when it's safe to call Ruby code
+ static void Memory_Profiler_Capture_process_freed_queue(void *arg) {
+ VALUE self = (VALUE)arg;
+ struct Memory_Profiler_Capture *capture;
+ TypedData_Get_Struct(self, struct Memory_Profiler_Capture, &Memory_Profiler_Capture_type, capture);
+
+ if (DEBUG) {
+ fprintf(stderr, "Processing freed queue with %zu entries\n", capture->freed_queue.count);
+ }
+
+ // Process all freed objects in the queue
+ for (size_t i = 0; i < capture->freed_queue.count; i++) {
+ struct Memory_Profiler_Queue_Item *freed = Memory_Profiler_Queue_at(&capture->freed_queue, i);
+ VALUE klass = freed->klass;
+ VALUE allocations = freed->allocations;
+ VALUE state = freed->state;
+
+ struct Memory_Profiler_Capture_Allocations *record = Memory_Profiler_Allocations_get(allocations);
+
+ // Call the Ruby callback with (klass, :freeobj, state)
+ if (!NIL_P(record->callback)) {
+ rb_funcall(record->callback, rb_intern("call"), 3, klass, sym_freeobj, state);
+ }
+ }
+
+ // Clear the queue (elements are reused on next cycle)
+ Memory_Profiler_Queue_clear(&capture->freed_queue);
+ }
+
  // Handler for NEWOBJ event
  static void Memory_Profiler_Capture_newobj_handler(VALUE self, struct Memory_Profiler_Capture *capture, VALUE klass, VALUE object) {
  st_data_t allocations_data;
@@ -193,19 +269,35 @@ static void Memory_Profiler_Capture_newobj_handler(VALUE self, struct Memory_Pro
  }
 
  // Handler for FREEOBJ event
+ // CRITICAL: This runs during GC when no Ruby code can be executed!
+ // We MUST NOT call rb_funcall or any Ruby code here - just queue the work.
  static void Memory_Profiler_Capture_freeobj_handler(VALUE self, struct Memory_Profiler_Capture *capture, VALUE klass, VALUE object) {
  st_data_t allocations_data;
  if (st_lookup(capture->tracked_classes, (st_data_t)klass, &allocations_data)) {
  VALUE allocations = (VALUE)allocations_data;
  struct Memory_Profiler_Capture_Allocations *record = Memory_Profiler_Allocations_get(allocations);
  record->free_count++;
+
+ // If we have a callback and detailed tracking, queue the freeobj for later processing
  if (!NIL_P(record->callback) && record->object_states) {
  // Look up state stored during NEWOBJ
  st_data_t state_data;
  if (st_delete(record->object_states, (st_data_t *)&object, &state_data)) {
  VALUE state = (VALUE)state_data;
- // Call with (klass, :freeobj, state)
- rb_funcall(record->callback, rb_intern("call"), 3, klass, sym_freeobj, state);
+
+ // Push a new item onto the queue (returns pointer to write to)
+ // NOTE: realloc is safe during GC (doesn't trigger Ruby allocation)
+ struct Memory_Profiler_Queue_Item *freed = Memory_Profiler_Queue_push(&capture->freed_queue);
+ if (freed) {
+ // Write directly to the allocated space
+ freed->klass = klass;
+ freed->allocations = allocations;
+ freed->state = state;
+
+ // Trigger postponed job to process the queue after GC
+ rb_postponed_job_trigger(capture->postponed_job_handle);
+ }
+ // If push failed (out of memory), silently drop this freeobj event
  }
  }
  }
@@ -296,6 +388,21 @@ static VALUE Memory_Profiler_Capture_alloc(VALUE klass) {
 
  capture->enabled = 0;
 
+ // Initialize the freed object queue
+ Memory_Profiler_Queue_initialize(&capture->freed_queue, sizeof(struct Memory_Profiler_Queue_Item));
+
+ // Pre-register the postponed job for processing freed objects
+ // The job will be triggered whenever we queue freed objects during GC
+ capture->postponed_job_handle = rb_postponed_job_preregister(
+ 0, // flags
+ Memory_Profiler_Capture_process_freed_queue,
+ (void *)obj
+ );
+
+ if (capture->postponed_job_handle == POSTPONED_JOB_HANDLE_INVALID) {
+ rb_raise(rb_eRuntimeError, "Failed to register postponed job!");
+ }
+
  return obj;
  }
 
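The queue-then-drain discipline used by the FREEOBJ handler and the postponed job can be modelled in a few lines of Ruby (a schematic of the pattern only, not the C implementation):

```ruby
# During the unsafe phase (GC), events are only recorded -- callbacks are never
# invoked. Afterwards, the "postponed job" drains the whole batch in order.
freed_queue = []

# Unsafe phase: just record (klass, state) pairs, like the C handler does.
10.times { |state| freed_queue << { klass: Hash, state: state } }

# Safe phase: process every queued event in one batch, then reset the queue.
processed = freed_queue.map { |item| item[:state] }
freed_queue.clear

puts processed.sum  # 0 + 1 + ... + 9 = 45
puts freed_queue.size
```

The payoff is that callbacks observe every free event in order, while the hot path during GC does nothing but an append.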
data/ext/memory/profiler/queue.h ADDED
@@ -0,0 +1,122 @@
+ // Released under the MIT License.
+ // Copyright, 2025, by Samuel Williams.
+
+ // Provides a simple queue for storing elements directly (not as pointers).
+ // Elements are enqueued during GC and batch-processed afterward.
+
+ #pragma once
+
+ #include <stdlib.h>
+ #include <string.h>
+ #include <assert.h>
+
+ static const size_t MEMORY_PROFILER_QUEUE_DEFAULT_COUNT = 128;
+
+ struct Memory_Profiler_Queue {
+ // The queue storage (elements stored directly, not as pointers):
+ void *base;
+
+ // The allocated capacity (number of elements):
+ size_t capacity;
+
+ // The number of used elements:
+ size_t count;
+
+ // The size of each element in bytes:
+ size_t element_size;
+ };
+
+ // Initialize an empty queue
+ inline static void Memory_Profiler_Queue_initialize(struct Memory_Profiler_Queue *queue, size_t element_size)
+ {
+ queue->base = NULL;
+ queue->capacity = 0;
+ queue->count = 0;
+ queue->element_size = element_size;
+ }
+
+ // Free the queue and its contents
+ inline static void Memory_Profiler_Queue_free(struct Memory_Profiler_Queue *queue)
+ {
+ if (queue->base) {
+ free(queue->base);
+ queue->base = NULL;
+ }
+
+ queue->capacity = 0;
+ queue->count = 0;
+ }
+
+ // Resize the queue to have at least the given capacity
+ inline static int Memory_Profiler_Queue_resize(struct Memory_Profiler_Queue *queue, size_t required_capacity)
+ {
+ if (required_capacity <= queue->capacity) {
+ // Already big enough:
+ return 0;
+ }
+
+ size_t new_capacity = queue->capacity;
+
+ // If the queue is empty, we need to set the initial size:
+ if (new_capacity == 0) {
+ new_capacity = MEMORY_PROFILER_QUEUE_DEFAULT_COUNT;
+ }
+
+ // Double until we reach required capacity
+ while (new_capacity < required_capacity) {
+ // Check for overflow
+ if (new_capacity > (SIZE_MAX / (2 * queue->element_size))) {
+ return -1; // Would overflow
+ }
+ new_capacity *= 2;
+ }
+
+ // Check final size doesn't overflow
+ if (new_capacity > (SIZE_MAX / queue->element_size)) {
+ return -1; // Too large
+ }
+
+ // Reallocate
+ void *new_base = realloc(queue->base, new_capacity * queue->element_size);
+ if (new_base == NULL) {
+ return -1; // Allocation failed
+ }
+
+ queue->base = new_base;
+ queue->capacity = new_capacity;
+
+ return 1; // Success
+ }
+
+ // Push a new element onto the end of the queue, returning pointer to the allocated space
+ // WARNING: The returned pointer is only valid until the next push operation
+ inline static void* Memory_Profiler_Queue_push(struct Memory_Profiler_Queue *queue)
+ {
+ // Ensure we have capacity
+ size_t new_count = queue->count + 1;
+ if (new_count > queue->capacity) {
+ if (Memory_Profiler_Queue_resize(queue, new_count) == -1) {
+ return NULL;
+ }
+ }
+
+ // Calculate pointer to the new element
+ void *element = (char*)queue->base + (queue->count * queue->element_size);
+ queue->count++;
+
+ return element;
+ }
+
+ // Clear the queue (reset count to 0, reusing allocated memory)
+ inline static void Memory_Profiler_Queue_clear(struct Memory_Profiler_Queue *queue)
+ {
+ queue->count = 0;
+ }
+
+ // Get element at index (for iteration)
+ // WARNING: Do not hold these pointers across push operations
+ inline static void* Memory_Profiler_Queue_at(struct Memory_Profiler_Queue *queue, size_t index)
+ {
+ assert(index < queue->count);
+ return (char*)queue->base + (index * queue->element_size);
+ }
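The growth policy of `Memory_Profiler_Queue_resize` can be restated in Ruby for clarity (assuming the same start-at-128, double-until-sufficient policy shown above, with the overflow checks omitted):

```ruby
DEFAULT_CAPACITY = 128 # mirrors MEMORY_PROFILER_QUEUE_DEFAULT_COUNT

# Return the capacity the queue would grow to for a given requirement.
def next_capacity(current, required)
  return current if current > 0 && required <= current # already big enough
  capacity = current.zero? ? DEFAULT_CAPACITY : current
  capacity *= 2 while capacity < required # geometric growth
  capacity
end

puts next_capacity(0, 1)      # 128
puts next_capacity(128, 129)  # 256
puts next_capacity(256, 1000) # 1024
```

Geometric growth keeps the amortized cost of `push` constant, which matters here because pushes happen inside the FREEOBJ handler during GC.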
@@ -7,7 +7,7 @@
  module Memory
  # @namespace
  module Profiler
- VERSION = "1.1.2"
+ VERSION = "1.1.3"
  end
  end
 
data/readme.md CHANGED
@@ -9,14 +9,14 @@ Efficient memory allocation tracking focused on **retained objects only**. Autom
  - **Retained Objects Only**: Uses `RUBY_INTERNAL_EVENT_NEWOBJ` and `RUBY_INTERNAL_EVENT_FREEOBJ` to automatically track only objects that survive GC.
  - **O(1) Live Counts**: Maintains per-class counters updated on alloc/free - no heap enumeration needed\!
  - **Tree-Based Analysis**: Deduplicates common call paths using an efficient tree structure.
- - **Native C Extension**: **Required** - uses Ruby internal events not available in pure Ruby.
- - **Configurable Depth**: Control how deep to capture call stacks.
 
  ## Usage
 
  Please see the [project documentation](https://socketry.github.io/memory-profiler/) for more details.
 
- - [Getting Started](https://socketry.github.io/memory-profiler/guides/getting-started/index) - This guide explains how to use `memory-profiler` to detect and diagnose memory leaks in Ruby applications.
+ - [Getting Started](https://socketry.github.io/memory-profiler/guides/getting-started/index) - This guide explains how to use `memory-profiler` to automatically detect and diagnose memory leaks in Ruby applications.
+
+ - [Rack Integration](https://socketry.github.io/memory-profiler/guides/rack-integration/index) - This guide explains how to integrate `memory-profiler` into Rack applications for automatic memory leak detection.
 
  ## Releases
 
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: memory-profiler
  version: !ruby/object:Gem::Version
- version: 1.1.2
+ version: 1.1.3
  platform: ruby
  authors:
  - Samuel Williams
@@ -45,12 +45,14 @@ extra_rdoc_files: []
  files:
  - context/getting-started.md
  - context/index.yaml
+ - context/rack-integration.md
  - ext/extconf.rb
  - ext/memory/profiler/allocations.c
  - ext/memory/profiler/allocations.h
  - ext/memory/profiler/capture.c
  - ext/memory/profiler/capture.h
  - ext/memory/profiler/profiler.c
+ - ext/memory/profiler/queue.h
  - lib/memory/profiler.rb
  - lib/memory/profiler/call_tree.rb
  - lib/memory/profiler/capture.rb
metadata.gz.sig CHANGED
Binary file