memory-profiler 1.1.2 → 1.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 06ea6c4892f9d17ac8fa69ba374d2a6a94ab8b809be840b26cb544b18b529edc
- data.tar.gz: a1c283a60721f988cac65878551eba4472a4dd889649d42b1409ba9a7b8b6756
+ metadata.gz: c70385bac57de768e1f6eb3ca9044942a357d77c97967b30636acd89b1f051f4
+ data.tar.gz: 8c7ae2a117684e61d34e09e9b2ebb4a99ab440b1b31fee364d2a92bc6caf5cac
  SHA512:
- metadata.gz: 22a34ac3cd4ad0c0eead78deef38cd79f20f6c96338f0af5eff191a3d2f2d99ea7dc44a40993838ead2678cb52a257fbed2548e08248e5e389277801c5895cae
- data.tar.gz: 49d267900c65e5481b6d784766e17721ba9919b984dcaf84737022622f4bbb9379773a0c98166a5d2b767192c787dd9fd13a8c43e98db3abd8d3dcb86aec74c5
+ metadata.gz: 50d848d974700ab480c805c82decf0fa58806be6f087ab4a199fa6b88963c303e459ca7a3ea2dabfea9d05bc55972edc1912917f49b46dc2752f1ea8f6d23f4b
+ data.tar.gz: 248a8fded6707058ccea2f9e1ee8eef074af4f8afeb5abfa9e786ef0e8c05efbd12fc8139eb887a156a84eaa05aba35f4dec1907426153ba935d0339198a5532
checksums.yaml.gz.sig CHANGED
Binary file
@@ -1,6 +1,6 @@
  # Getting Started
 
- This guide explains how to use `memory-profiler` to detect and diagnose memory leaks in Ruby applications.
+ This guide explains how to use `memory-profiler` to automatically detect and diagnose memory leaks in Ruby applications.
 
  ## Installation
 
@@ -12,218 +12,127 @@ $ bundle add memory-profiler
 
  ## Core Concepts
 
- Memory leaks happen when your application creates objects that should be garbage collected but remain referenced indefinitely. Over time, this causes memory usage to grow unbounded, eventually leading to performance degradation or out-of-memory crashes.
+ Memory leaks happen when your application creates objects that should be garbage collected but remain referenced indefinitely. Over time, this causes unbounded memory growth, leading to performance degradation or crashes.
 
- `memory-profiler` helps you find memory leaks by tracking object allocations in real-time:
-
- - **{ruby Memory::Profiler::Capture}** monitors allocations using Ruby's internal NEWOBJ/FREEOBJ events.
- - **{ruby Memory::Profiler::CallTree}** aggregates allocation call paths to identify leak sources.
+ - {ruby Memory::Profiler::Capture} monitors allocations using Ruby's internal NEWOBJ/FREEOBJ events.
+ - {ruby Memory::Profiler::CallTree} aggregates allocation call paths to identify leak sources.
  - **No heap enumeration** - uses O(1) counters updated automatically by the VM.
 
- ## Usage
-
- ### Monitor Memory Growth
+ ## Basic Usage
 
- Start by identifying which classes are accumulating objects:
+ The simplest way to detect memory leaks is to run the automatic sampler:
 
  ~~~ ruby
  require 'memory/profiler'
 
- # Create a capture instance:
- capture = Memory::Profiler::Capture.new
+ # Create a sampler that monitors all allocations:
+ sampler = Memory::Profiler::Sampler.new(
+ # Call stack depth for analysis:
+ depth: 10,
 
- # Start tracking all object allocations:
- capture.start
+ # Enable detailed tracking after 10 increases:
+ increases_threshold: 10
+ )
 
- # Run your application code...
- run_your_app
+ sampler.start
 
- # Check live object counts for common classes:
- puts "Hashes: #{capture.count_for(Hash)}"
- puts "Arrays: #{capture.count_for(Array)}"
- puts "Strings: #{capture.count_for(String)}"
+ # Run periodic sampling in a background thread:
+ Thread.new do
+ sampler.run(interval: 60) do |sample|
+ puts "⚠️ #{sample.target} growing: #{sample.current_size} objects (#{sample.increases} increases)"
+
+ # After 10 increases, detailed statistics are automatically available:
+ if sample.increases >= 10
+ statistics = sampler.statistics(sample.target)
+ puts "Top leak sources:"
+ statistics[:top_paths].each do |path_data|
+ puts " #{path_data[:count]}x from: #{path_data[:path].first}"
+ end
+ end
+ end
+ end
 
- capture.stop
+ # Your application runs here...
+ objects = []
+ while true
+ # Simulate a memory leak:
+ objects << Hash.new
+ sleep 0.1
+ end
  ~~~
 
- **What this tells you**: Which object types are growing over time. If Hash count keeps increasing across multiple samples, you likely have a Hash leak.
+ **What happens:**
+ 1. Sampler automatically tracks every class that allocates objects.
+ 2. Every 60 seconds, checks if any class grew significantly (>1000 objects).
+ 3. Reports growth via the block you provide.
+ 4. After 10 sustained increases, automatically captures call paths.
+ 5. You can then query `statistics(klass)` to find leak sources.
 
- ### Find the Leak Source
+ ## Manual Investigation
 
- Once you've identified a leaking class, use call path analysis to find WHERE allocations come from:
+ If you already know which class is leaking, you can investigate immediately:
 
  ~~~ ruby
- # Create a sampler with call path analysis:
- sampler = Memory::Profiler::Sampler.new(depth: 10)
+ sampler = Memory::Profiler::Sampler.new(depth: 15)
+ sampler.start
 
- # Track the leaking class with analysis:
+ # Enable detailed tracking for specific class:
  sampler.track_with_analysis(Hash)
- sampler.start
 
  # Run code that triggers the leak:
- simulate_leak
+ 1000.times { process_request }
 
- # Analyze where allocations come from:
+ # Analyze:
  statistics = sampler.statistics(Hash)
 
- puts "Live objects: #{statistics[:live_count]}"
+ puts "Live Hashes: #{statistics[:live_count]}"
  puts "\nTop allocation sources:"
  statistics[:top_paths].first(5).each do |path_data|
- puts "\n#{path_data[:count]} allocations from:"
- path_data[:path].each { |frame| puts " #{frame}" }
+ puts "\n#{path_data[:count]} allocations from:"
+ path_data[:path].each { |frame| puts " #{frame}" }
  end
 
- sampler.stop
- ~~~
-
- **What this shows**: The complete call stacks that led to Hash allocations. Look for unexpected paths or paths that appear repeatedly.
-
- ## Real-World Example
-
- Let's say you notice your app's memory growing over time. Here's how to diagnose it:
-
- ~~~ ruby
- require 'memory/profiler'
-
- # Setup monitoring:
- capture = Memory::Profiler::Capture.new
- capture.start
-
- # Take baseline measurement:
- GC.start # Clean up old objects first
- baseline = {
- hashes: capture.count_for(Hash),
- arrays: capture.count_for(Array),
- strings: capture.count_for(String)
- }
-
- # Run your application for a period:
- # In production: sample periodically (every 60 seconds)
- # In development: run through typical workflows
- sleep 60
-
- # Check what grew:
- current = {
- hashes: capture.count_for(Hash),
- arrays: capture.count_for(Array),
- strings: capture.count_for(String)
- }
-
- # Report growth:
- current.each do |type, count|
- growth = count - baseline[type]
- if growth > 100
- puts "⚠️ #{type} grew by #{growth} objects"
- end
+ puts "\nHotspot frames:"
+ statistics[:hotspots].first(5).each do |location, count|
+ puts " #{location}: #{count}"
  end
 
- capture.stop
+ sampler.stop!
  ~~~
 
- If Hashes grew significantly, enable detailed tracking:
-
- ~~~ ruby
- # Create detailed sampler:
- sampler = Memory::Profiler::Sampler.new(depth: 15)
- sampler.track_with_analysis(Hash)
- sampler.start
-
- # Run suspicious code path:
- process_user_requests(1000)
-
- # Find the culprits:
- statistics = sampler.statistics(Hash)
- statistics[:top_paths].first(3).each_with_index do |path_data, i|
- puts "\n#{i+1}. #{path_data[:count]} Hash allocations:"
- path_data[:path].first(5).each { |frame| puts " #{frame}" }
- end
-
- sampler.stop
- ~~~
-
- ## Best Practices
-
- ### When Tracking in Production
-
- 1. **Start tracking AFTER startup**: Call `GC.start` before `capture.start` to avoid counting initialization objects
- 2. **Use count-only mode for monitoring**: `capture.track(Hash)` (no callback) has minimal overhead
- 3. **Enable analysis only when investigating**: Call path analysis has higher overhead
- 4. **Sample periodically**: Take measurements every 60 seconds rather than continuously
- 5. **Stop when done**: Always call `stop()` to remove event hooks
-
- ### Performance Considerations
-
- **Count-only tracking** (no callback):
- - Minimal overhead (~5-10% on allocation hotpath)
- - Safe for production monitoring
- - Tracks all classes automatically
-
- **Call path analysis** (with callback):
- - Higher overhead (captures `caller_locations` on every allocation)
- - Use during investigation, not continuous monitoring
- - Only track specific classes you're investigating
-
- ### Avoiding False Positives
-
- Objects allocated before tracking starts but freed after will show as negative or zero:
-
- ~~~ ruby
- # ❌ Wrong - counts existing objects:
- capture.start
- 100.times { {} }
- GC.start # Frees old + new objects → underflow
-
- # ✅ Right - clean slate first:
- GC.start # Clear old objects
- capture.start
- 100.times { {} }
- ~~~
-
- ## Common Scenarios
-
- ### Detecting Cache Leaks
-
- ~~~ ruby
- # Monitor your cache class:
- capture = Memory::Profiler::Capture.new
- capture.start
-
- cache_baseline = capture.count_for(CacheEntry)
-
- # Run for a period:
- sleep 300 # 5 minutes
-
- cache_current = capture.count_for(CacheEntry)
-
- if cache_current > cache_baseline * 2
- puts "⚠️ Cache is leaking! #{cache_current - cache_baseline} entries added"
- # Enable detailed tracking to find the source
- end
- ~~~
-
- ### Finding Retention in Request Processing
-
- ~~~ ruby
- # Track during request processing:
- sampler = Memory::Profiler::Sampler.new
- sampler.track_with_analysis(Hash)
- sampler.start
-
- # Process requests:
- 1000.times do
- process_request
- end
-
- # Check if Hashes are being retained:
- statistics = sampler.statistics(Hash)
-
- if statistics[:live_count] > 1000
- puts "Leaking #{statistics[:live_count]} Hashes per 1000 requests!"
- statistics[:top_paths].first(3).each do |path_data|
- puts "\n#{path_data[:count]}x from:"
- puts path_data[:path].join("\n ")
- end
- end
-
- sampler.stop
- ~~~
+ ## Understanding the Output
+
+ **Sample data** (from growth detection):
+ - `target`: The class showing growth
+ - `current_size`: Current live object count
+ - `increases`: Number of sustained growth events (>1000 objects each)
+ - `threshold`: Minimum growth to trigger an increase
+
+ **Statistics** (after detailed tracking enabled):
+ - `live_count`: Current retained objects
+ - `top_paths`: Complete call stacks ranked by allocation frequency
+ - `hotspots`: Individual frames aggregated across all paths
+
+ **Top paths** show WHERE objects are created:
+ ```
+ 50 allocations from:
+ app/services/processor.rb:45:in 'process_item'
+ app/workers/job.rb:23:in 'perform'
+ ```
+
+ **Hotspots** show which lines appear most across all paths:
+ ```
+ app/services/processor.rb:45: 150 ← This line in many different call stacks
+ ```
+
+ ## Performance Considerations
+
+ **Automatic mode** (recommended for production):
+ - Minimal overhead initially (just counting).
+ - Detailed tracking only enabled when leaks detected.
+ - 60-second sampling interval is non-intrusive.
+
+ **Manual tracking** (for investigation):
+ - Higher overhead (captures `caller_locations` on every allocation).
+ - Use during debugging, not continuous monitoring.
+ - Only track specific classes you're investigating.
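
The `top_paths` / `hotspots` distinction in the new documentation above can be illustrated with plain Ruby. This is an illustrative sketch only, not the gem's implementation, and the sample frame strings are made up:

```ruby
# Hypothetical recorded call paths (leaf frame first), shaped like the output above:
paths = [
  ["app/services/processor.rb:45:in 'process_item'", "app/workers/job.rb:23:in 'perform'"],
  ["app/services/processor.rb:45:in 'process_item'", "app/workers/job.rb:23:in 'perform'"],
  ["app/services/processor.rb:45:in 'process_item'", "app/models/user.rb:10:in 'build'"]
]

# top_paths: complete call stacks ranked by allocation frequency:
top_paths = paths.tally
  .map { |path, count| {path: path, count: count} }
  .sort_by { |data| -data[:count] }

# hotspots: individual frames aggregated across all paths:
hotspots = paths.flatten.tally.sort_by { |_location, count| -count }.to_h

puts top_paths.first[:count] # the most common complete stack appears twice
puts hotspots["app/services/processor.rb:45:in 'process_item'"] # this frame appears in all three paths
```

Note how a single frame can dominate `hotspots` even when it is reached through many different complete stacks, which is exactly the "150 ← This line in many different call stacks" situation shown above.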
data/context/index.yaml CHANGED
@@ -8,5 +8,9 @@ metadata:
  files:
  - path: getting-started.md
  title: Getting Started
- description: This guide explains how to use `memory-profiler` to detect and diagnose
- memory leaks in Ruby applications.
+ description: This guide explains how to use `memory-profiler` to automatically detect
+ and diagnose memory leaks in Ruby applications.
+ - path: rack-integration.md
+ title: Rack Integration
+ description: This guide explains how to integrate `memory-profiler` into Rack applications
+ for automatic memory leak detection.
@@ -0,0 +1,70 @@
+ # Rack Integration
+
+ This guide explains how to integrate `memory-profiler` into Rack applications for automatic memory leak detection.
+
+ ## Overview
+
+ The Rack middleware pattern provides a clean way to add memory monitoring to your application. The sampler runs in a background thread, automatically detecting leaks without impacting request processing.
+
+ ## Basic Middleware
+
+ Create a middleware that monitors memory in the background:
+
+ ~~~ ruby
+ # app/middleware/memory_monitoring.rb
+ require 'console'
+ require 'memory/profiler'
+
+ class MemoryMonitoring
+ def initialize(app)
+ @app = app
+
+ # Create sampler with automatic leak detection:
+ @sampler = Memory::Profiler::Sampler.new(
+ # Use up to 10 caller locations for leak call graph analysis:
+ depth: 10,
+ # Enable detailed tracking after 10 increases:
+ increases_threshold: 10
+ )
+
+ @sampler.start
+ Console.info("Memory monitoring enabled")
+
+ # Background thread runs periodic sampling:
+ @thread = Thread.new do
+ @sampler.run(interval: 60) do |sample|
+ Console.warn(sample.target, "Memory usage increased!", sample: sample)
+
+ # After threshold, get leak sources:
+ if sample.increases >= 10
+ if statistics = @sampler.statistics(sample.target)
+ Console.error(sample.target, "Memory leak analysis:", statistics: statistics)
+ end
+ end
+ end
+ end
+ end
+
+ def call(env)
+ @app.call(env)
+ end
+
+ def shutdown
+ @thread&.kill
+ @sampler&.stop!
+ end
+ end
+ ~~~
+
+ ## Adding to config.ru
+
+ Add the middleware to your Rack application:
+
+ ~~~ ruby
+ # config.ru
+ require_relative 'app/middleware/memory_monitoring'
+
+ use MemoryMonitoring
+
+ run YourApp
+ ~~~
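
The middleware above defines `shutdown`, but nothing in the `config.ru` example invokes it; wiring it up depends on your server's lifecycle hooks. The thread-cleanup contract it relies on can be sketched in plain Ruby — `BackgroundMonitor` here is a hypothetical stand-in for the middleware, not the gem's API:

```ruby
# Stand-in for the middleware's background-thread lifecycle (illustrative only):
class BackgroundMonitor
  def start
    @thread = Thread.new do
      loop { sleep 0.01 } # periodic sampling would happen here
    end
  end

  def shutdown
    @thread&.kill
    @thread&.join # wait for the thread to actually terminate
  end
end

monitor = BackgroundMonitor.new
monitor.start

# One hedged option for servers without explicit lifecycle hooks:
# at_exit { monitor.shutdown }
```

Joining after `kill` matters: without it, the process may exit while the sampling thread is mid-iteration.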
@@ -12,8 +12,9 @@ static VALUE Memory_Profiler_Allocations = Qnil;
 
  // Helper to mark object_states table values
  static int Memory_Profiler_Allocations_object_states_mark(st_data_t key, st_data_t value, st_data_t arg) {
- VALUE object = (VALUE)key;
- rb_gc_mark_movable(object);
+ // Don't mark the object, we use it as a key but we never use it as an object, and we don't want to retain it.
+ // VALUE object = (VALUE)key;
+ // rb_gc_mark_movable(object);
 
  VALUE state = (VALUE)value;
  if (!NIL_P(state)) {
@@ -3,6 +3,7 @@
 
  #include "capture.h"
  #include "allocations.h"
+ #include "queue.h"
 
  #include "ruby.h"
  #include "ruby/debug.h"
@@ -12,6 +13,8 @@
 
  enum {
  DEBUG = 0,
+ DEBUG_FREED_QUEUE = 0,
+ DEBUG_STATE = 0,
  };
 
  static VALUE Memory_Profiler_Capture = Qnil;
@@ -20,6 +23,18 @@ static VALUE Memory_Profiler_Capture = Qnil;
  static VALUE sym_newobj;
  static VALUE sym_freeobj;
 
+ // Queue item - freed object data to be processed after GC
+ struct Memory_Profiler_Queue_Item {
+ // The class of the freed object:
+ VALUE klass;
+
+ // The Allocations wrapper:
+ VALUE allocations;
+
+ // The state returned from callback on newobj:
+ VALUE state;
+ };
+
  // Main capture state
  struct Memory_Profiler_Capture {
  // class => VALUE (wrapped Memory_Profiler_Capture_Allocations).
@@ -27,6 +42,12 @@
 
  // Is tracking enabled (via start/stop):
  int enabled;
+
+ // Queue for freed objects (processed after GC via postponed job)
+ struct Memory_Profiler_Queue freed_queue;
+
+ // Handle for the postponed job
+ rb_postponed_job_handle_t postponed_job_handle;
  };
 
  // GC mark callback for tracked_classes table
@@ -53,6 +74,17 @@ static void Memory_Profiler_Capture_mark(void *ptr) {
  if (capture->tracked_classes) {
  st_foreach(capture->tracked_classes, Memory_Profiler_Capture_tracked_classes_mark, 0);
  }
+
+ // Mark freed objects in the queue:
+ for (size_t i = 0; i < capture->freed_queue.count; i++) {
+ struct Memory_Profiler_Queue_Item *freed = Memory_Profiler_Queue_at(&capture->freed_queue, i);
+ rb_gc_mark_movable(freed->klass);
+ rb_gc_mark_movable(freed->allocations);
+
+ if (freed->state) {
+ rb_gc_mark_movable(freed->state);
+ }
+ }
  }
 
  // GC free function
@@ -63,6 +95,9 @@ static void Memory_Profiler_Capture_free(void *ptr) {
  st_free_table(capture->tracked_classes);
  }
 
+ // Free the queue (elements are stored directly, just free the queue)
+ Memory_Profiler_Queue_free(&capture->freed_queue);
+
  xfree(capture);
  }
 
@@ -75,6 +110,9 @@ static size_t Memory_Profiler_Capture_memsize(const void *ptr) {
  size += capture->tracked_classes->num_entries * (sizeof(st_data_t) + sizeof(struct Memory_Profiler_Capture_Allocations));
  }
 
+ // Add size of freed queue (elements stored directly)
+ size += capture->freed_queue.capacity * capture->freed_queue.element_size;
+
  return size;
  }
 
@@ -113,6 +151,16 @@ static void Memory_Profiler_Capture_compact(void *ptr) {
  rb_raise(rb_eRuntimeError, "tracked_classes modified during GC compaction");
  }
  }
+
+ // Update freed objects in the queue
+ for (size_t i = 0; i < capture->freed_queue.count; i++) {
+ struct Memory_Profiler_Queue_Item *freed = Memory_Profiler_Queue_at(&capture->freed_queue, i);
+
+ // Update all VALUEs if they moved during compaction
+ freed->klass = rb_gc_location(freed->klass);
+ freed->allocations = rb_gc_location(freed->allocations);
+ freed->state = rb_gc_location(freed->state);
+ }
  }
 
  static const rb_data_type_t Memory_Profiler_Capture_type = {
@@ -144,6 +192,34 @@ const char *event_flag_name(rb_event_flag_t event_flag) {
  }
  }
 
+ // Postponed job callback - processes queued freed objects
+ // This runs after GC completes, when it's safe to call Ruby code
+ static void Memory_Profiler_Capture_process_freed_queue(void *arg) {
+ VALUE self = (VALUE)arg;
+ struct Memory_Profiler_Capture *capture;
+ TypedData_Get_Struct(self, struct Memory_Profiler_Capture, &Memory_Profiler_Capture_type, capture);
+
+ if (DEBUG_FREED_QUEUE) fprintf(stderr, "Processing freed queue with %zu entries\n", capture->freed_queue.count);
+
+ // Process all freed objects in the queue
+ for (size_t i = 0; i < capture->freed_queue.count; i++) {
+ struct Memory_Profiler_Queue_Item *freed = Memory_Profiler_Queue_at(&capture->freed_queue, i);
+ VALUE klass = freed->klass;
+ VALUE allocations = freed->allocations;
+ VALUE state = freed->state;
+
+ struct Memory_Profiler_Capture_Allocations *record = Memory_Profiler_Allocations_get(allocations);
+
+ // Call the Ruby callback with (klass, :freeobj, state)
+ if (!NIL_P(record->callback)) {
+ rb_funcall(record->callback, rb_intern("call"), 3, klass, sym_freeobj, state);
+ }
+ }
+
+ // Clear the queue (elements are reused on next cycle)
+ Memory_Profiler_Queue_clear(&capture->freed_queue);
+ }
+
  // Handler for NEWOBJ event
  static void Memory_Profiler_Capture_newobj_handler(VALUE self, struct Memory_Profiler_Capture *capture, VALUE klass, VALUE object) {
  st_data_t allocations_data;
@@ -168,6 +244,9 @@ static void Memory_Profiler_Capture_newobj_handler(VALUE self, struct Memory_Pro
  if (!record->object_states) {
  record->object_states = st_init_numtable();
  }
+
+ if (DEBUG_STATE) fprintf(stderr, "Memory_Profiler_Capture_newobj_handler: Inserting state for object: %p (%s)\n", (void *)object, rb_class2name(klass));
+
  st_insert(record->object_states, (st_data_t)object, (st_data_t)state);
  // Notify GC about the state VALUE stored in the table
  RB_OBJ_WRITTEN(self, Qnil, state);
@@ -193,19 +272,42 @@ static void Memory_Profiler_Capture_newobj_handler(VALUE self, struct Memory_Pro
  }
 
  // Handler for FREEOBJ event
+ // CRITICAL: This runs during GC when no Ruby code can be executed!
+ // We MUST NOT call rb_funcall or any Ruby code here - just queue the work.
  static void Memory_Profiler_Capture_freeobj_handler(VALUE self, struct Memory_Profiler_Capture *capture, VALUE klass, VALUE object) {
  st_data_t allocations_data;
  if (st_lookup(capture->tracked_classes, (st_data_t)klass, &allocations_data)) {
  VALUE allocations = (VALUE)allocations_data;
  struct Memory_Profiler_Capture_Allocations *record = Memory_Profiler_Allocations_get(allocations);
  record->free_count++;
+
+ // If we have a callback and detailed tracking, queue the freeobj for later processing
  if (!NIL_P(record->callback) && record->object_states) {
+ if (DEBUG_STATE) fprintf(stderr, "Memory_Profiler_Capture_freeobj_handler: Looking up state for object: %p\n", (void *)object);
+
  // Look up state stored during NEWOBJ
  st_data_t state_data;
  if (st_delete(record->object_states, (st_data_t *)&object, &state_data)) {
+ if (DEBUG_STATE) fprintf(stderr, "Found state for object: %p\n", (void *)object);
  VALUE state = (VALUE)state_data;
- // Call with (klass, :freeobj, state)
- rb_funcall(record->callback, rb_intern("call"), 3, klass, sym_freeobj, state);
+
+ // Push a new item onto the queue (returns pointer to write to)
+ // NOTE: realloc is safe during GC (doesn't trigger Ruby allocation)
+ struct Memory_Profiler_Queue_Item *freed = Memory_Profiler_Queue_push(&capture->freed_queue);
+ if (freed) {
+ if (DEBUG_FREED_QUEUE) fprintf(stderr, "Queued freed object, queue size now: %zu/%zu\n", capture->freed_queue.count, capture->freed_queue.capacity);
+ // Write directly to the allocated space
+ freed->klass = klass;
+ freed->allocations = allocations;
+ freed->state = state;
+
+ // Trigger postponed job to process the queue after GC
+ if (DEBUG_FREED_QUEUE) fprintf(stderr, "Triggering postponed job to process the queue after GC\n");
+ rb_postponed_job_trigger(capture->postponed_job_handle);
+ } else {
+ if (DEBUG_FREED_QUEUE) fprintf(stderr, "Failed to queue freed object, out of memory\n");
+ }
+ // If push failed (out of memory), silently drop this freeobj event
  }
  }
  }
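
The NEWOBJ/FREEOBJ state round-trip these handlers implement can be simulated in plain Ruby. This is an illustrative sketch only — the real storage is the C-level `object_states` table, and `:object_1` is a hypothetical key standing in for an object address:

```ruby
# Simulated callback: returns a state object on :newobj, and receives that
# same state back on :freeobj (delivered by the postponed job after GC).
callback = lambda do |klass, event, state|
  case event
  when :newobj
    # The returned value is what the extension stores in object_states:
    {class: klass, allocated_at: "example.rb:1"}
  when :freeobj
    # The stored state round-trips back when the object is freed:
    state[:class]
  end
end

object_states = {}

# NEWOBJ: invoke the callback and store the state it returns:
object_states[:object_1] = callback.call(Hash, :newobj, nil)

# FREEOBJ: delete the state and hand it back to the callback:
freed_class = callback.call(Hash, :freeobj, object_states.delete(:object_1))
puts freed_class # => Hash
```

The point of the queue in the C code is that the `:freeobj` leg of this round trip cannot run during GC, so it is deferred until the postponed job fires.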
@@ -266,7 +368,12 @@ static void Memory_Profiler_Capture_event_callback(VALUE data, void *ptr) {
  if (!klass) return;
 
  if (DEBUG) {
- const char *klass_name = rb_class2name(klass);
+ // In events other than NEWOBJ, we are unable to allocate objects (due to GC), so we simply say "ignored":
+ const char *klass_name = "ignored";
+ if (event_flag == RUBY_INTERNAL_EVENT_NEWOBJ) {
+ klass_name = rb_class2name(klass);
+ }
+
  fprintf(stderr, "Memory_Profiler_Capture_event_callback: %s, Object: %p, Class: %p (%s)\n", event_flag_name(event_flag), (void *)object, (void *)klass, klass_name);
  }
 
@@ -296,6 +403,21 @@ static VALUE Memory_Profiler_Capture_alloc(VALUE klass) {
 
  capture->enabled = 0;
 
+ // Initialize the freed object queue
+ Memory_Profiler_Queue_initialize(&capture->freed_queue, sizeof(struct Memory_Profiler_Queue_Item));
+
+ // Pre-register the postponed job for processing freed objects
+ // The job will be triggered whenever we queue freed objects during GC
+ capture->postponed_job_handle = rb_postponed_job_preregister(
+ 0, // flags
+ Memory_Profiler_Capture_process_freed_queue,
+ (void *)obj
+ );
+
+ if (capture->postponed_job_handle == POSTPONED_JOB_HANDLE_INVALID) {
+ rb_raise(rb_eRuntimeError, "Failed to register postponed job!");
+ }
+
  return obj;
  }
 
@@ -342,6 +464,7 @@ static VALUE Memory_Profiler_Capture_stop(VALUE self) {
  // Add a class to track with optional callback
  // Usage: track(klass) or track(klass) { |obj, klass| ... }
  // Callback can call caller_locations with desired depth
+ // Returns the Allocations object for the tracked class
  static VALUE Memory_Profiler_Capture_track(int argc, VALUE *argv, VALUE self) {
  struct Memory_Profiler_Capture *capture;
  TypedData_Get_Struct(self, struct Memory_Profiler_Capture, &Memory_Profiler_Capture_type, capture);
@@ -350,8 +473,10 @@ static VALUE Memory_Profiler_Capture_track(int argc, VALUE *argv, VALUE self) {
  rb_scan_args(argc, argv, "1&", &klass, &callback);
 
  st_data_t allocations_data;
+ VALUE allocations;
+
  if (st_lookup(capture->tracked_classes, (st_data_t)klass, &allocations_data)) {
- VALUE allocations = (VALUE)allocations_data;
+ allocations = (VALUE)allocations_data;
  struct Memory_Profiler_Capture_Allocations *record = Memory_Profiler_Allocations_get(allocations);
  RB_OBJ_WRITE(self, &record->callback, callback);
  } else {
@@ -362,7 +487,7 @@ static VALUE Memory_Profiler_Capture_track(int argc, VALUE *argv, VALUE self) {
  record->object_states = NULL;
 
  // Wrap the record in a VALUE
- VALUE allocations = Memory_Profiler_Allocations_wrap(record);
+ allocations = Memory_Profiler_Allocations_wrap(record);
  st_insert(capture->tracked_classes, (st_data_t)klass, (st_data_t)allocations);
  // Notify GC about the class VALUE stored as key in the table
@@ -375,7 +500,7 @@ static VALUE Memory_Profiler_Capture_track(int argc, VALUE *argv, VALUE self) {
  }
  }
 
- return self;
+ return allocations;
  }
 
  // Stop tracking a class
@@ -465,6 +590,19 @@ static VALUE Memory_Profiler_Capture_each(VALUE self) {
  return self;
  }
 
+ // Get allocations for a specific class
+ static VALUE Memory_Profiler_Capture_aref(VALUE self, VALUE klass) {
+ struct Memory_Profiler_Capture *capture;
+ TypedData_Get_Struct(self, struct Memory_Profiler_Capture, &Memory_Profiler_Capture_type, capture);
+
+ st_data_t allocations_data;
+ if (st_lookup(capture->tracked_classes, (st_data_t)klass, &allocations_data)) {
+ return (VALUE)allocations_data;
+ }
+
+ return Qnil;
+ }
+
  void Init_Memory_Profiler_Capture(VALUE Memory_Profiler)
  {
  // Initialize event symbols
@@ -484,6 +622,7 @@ void Init_Memory_Profiler_Capture(VALUE Memory_Profiler)
  rb_define_method(Memory_Profiler_Capture, "tracking?", Memory_Profiler_Capture_tracking_p, 1);
  rb_define_method(Memory_Profiler_Capture, "count_for", Memory_Profiler_Capture_count_for, 1);
  rb_define_method(Memory_Profiler_Capture, "each", Memory_Profiler_Capture_each, 0);
+ rb_define_method(Memory_Profiler_Capture, "[]", Memory_Profiler_Capture_aref, 1);
  rb_define_method(Memory_Profiler_Capture, "clear", Memory_Profiler_Capture_clear, 0);
 
  // Initialize Allocations class
@@ -0,0 +1,122 @@
1
+ // Released under the MIT License.
2
+ // Copyright, 2025, by Samuel Williams.
3
+
4
+ // Provides a simple queue for storing elements directly (not as pointers).
5
+ // Elements are enqueued during GC and batch-processed afterward.
6
+
7
+ #pragma once
8
+
9
+ #include <stdlib.h>
10
+ #include <string.h>
11
+ #include <assert.h>
12
+
13
+ static const size_t MEMORY_PROFILER_QUEUE_DEFAULT_COUNT = 128;
14
+
15
+ struct Memory_Profiler_Queue {
16
+ // The queue storage (elements stored directly, not as pointers):
17
+ void *base;
18
+
19
+ // The allocated capacity (number of elements):
20
+ size_t capacity;
21
+
22
+ // The number of used elements:
23
+ size_t count;
24
+
25
+ // The size of each element in bytes:
26
+ size_t element_size;
27
+ };
28
+
29
+ // Initialize an empty queue
30
+ inline static void Memory_Profiler_Queue_initialize(struct Memory_Profiler_Queue *queue, size_t element_size)
31
+ {
32
+ queue->base = NULL;
33
+ queue->capacity = 0;
34
+ queue->count = 0;
35
+ queue->element_size = element_size;
36
+ }
37
+
38
+ // Free the queue and its contents
39
+ inline static void Memory_Profiler_Queue_free(struct Memory_Profiler_Queue *queue)
40
+ {
41
+ if (queue->base) {
42
+ free(queue->base);
43
+ queue->base = NULL;
44
+ }
45
+
46
+ queue->capacity = 0;
47
+ queue->count = 0;
48
+ }
49
+
50
+ // Resize the queue to have at least the given capacity
51
+ inline static int Memory_Profiler_Queue_resize(struct Memory_Profiler_Queue *queue, size_t required_capacity)
52
+ {
53
+ if (required_capacity <= queue->capacity) {
54
+ // Already big enough:
55
+ return 0;
56
+ }
57
+
58
+ size_t new_capacity = queue->capacity;
59
+
60
+ // If the queue is empty, we need to set the initial size:
61
+ if (new_capacity == 0) {
62
+ new_capacity = MEMORY_PROFILER_QUEUE_DEFAULT_COUNT;
63
+ }
64
+
65
+ // Double until we reach required capacity
66
+ while (new_capacity < required_capacity) {
67
+ // Check for overflow
68
+ if (new_capacity > (SIZE_MAX / (2 * queue->element_size))) {
69
+ return -1; // Would overflow
70
+ }
71
+ new_capacity *= 2;
72
+ }
73
+
74
+ // Check final size doesn't overflow
75
+ if (new_capacity > (SIZE_MAX / queue->element_size)) {
76
+ return -1; // Too large
77
+ }
78
+
79
+ // Reallocate
80
+ void *new_base = realloc(queue->base, new_capacity * queue->element_size);
81
+ if (new_base == NULL) {
82
+ return -1; // Allocation failed
83
+ }
84
+
85
+ queue->base = new_base;
86
+ queue->capacity = new_capacity;
87
+
88
+ return 1; // Success
89
+ }
90
+
91
+ // Push a new element onto the end of the queue, returning pointer to the allocated space
92
+ // WARNING: The returned pointer is only valid until the next push operation
93
+ inline static void* Memory_Profiler_Queue_push(struct Memory_Profiler_Queue *queue)
94
+ {
95
+ // Ensure we have capacity
96
+ size_t new_count = queue->count + 1;
97
+ if (new_count > queue->capacity) {
98
+ if (Memory_Profiler_Queue_resize(queue, new_count) == -1) {
99
+ return NULL;
100
+ }
101
+ }
102
+
103
+ // Calculate pointer to the new element
104
+ void *element = (char*)queue->base + (queue->count * queue->element_size);
105
+ queue->count++;
106
+
107
+ return element;
108
+ }
109
+
110
+ // Clear the queue (reset count to 0, reusing allocated memory)
111
+ inline static void Memory_Profiler_Queue_clear(struct Memory_Profiler_Queue *queue)
112
+ {
113
+ queue->count = 0;
114
+ }
115
+
116
+ // Get element at index (for iteration)
117
+ // WARNING: Do not hold these pointers across push operations
118
+ inline static void* Memory_Profiler_Queue_at(struct Memory_Profiler_Queue *queue, size_t index)
119
+ {
120
+ assert(index < queue->count);
121
+ return (char*)queue->base + (index * queue->element_size);
122
+ }
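The resize routine above grows capacity geometrically: it starts from a default, doubles until the request is covered, and bails out before any multiplication can overflow `size_t`. The growth policy can be sketched in plain Ruby (the `SIZE_MAX` value assumes a 64-bit `size_t`, and `DEFAULT_COUNT = 128` is a hypothetical stand-in for `MEMORY_PROFILER_QUEUE_DEFAULT_COUNT`, whose value is not shown in this diff):

```ruby
SIZE_MAX = 2**64 - 1  # assuming a 64-bit size_t
DEFAULT_COUNT = 128   # hypothetical; the C header defines MEMORY_PROFILER_QUEUE_DEFAULT_COUNT

# Mirrors Memory_Profiler_Queue_resize's growth policy: return the new
# capacity, or nil when the byte size would overflow size_t.
def grown_capacity(current, required, element_size)
  return current if required <= current
  capacity = current.zero? ? DEFAULT_COUNT : current
  while capacity < required
    return nil if capacity > SIZE_MAX / (2 * element_size) # doubling would overflow
    capacity *= 2
  end
  return nil if capacity > SIZE_MAX / element_size # final byte size too large
  capacity
end

grown_capacity(0, 1, 16)     # => 128 (first allocation jumps to the default)
grown_capacity(128, 300, 16) # => 512 (doubles until >= required)
```

Doubling keeps the amortized cost of `push` constant, while the two division-based guards reject requests that `realloc` could never satisfy without wrapping the size computation.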
@@ -141,7 +141,7 @@ module Memory
141
141
  if sample.sample!(count)
142
142
  # Check if we should enable detailed tracking
143
143
  if sample.increases >= @increases_threshold && !@call_trees.key?(klass)
144
- track_with_analysis_internal(klass, allocations)
144
+ track(klass, allocations)
145
145
  end
146
146
 
147
147
  # Notify about growth if block given
@@ -150,13 +150,19 @@ module Memory
150
150
  end
151
151
  end
152
152
 
153
- # Internal: Enable tracking with analysis using allocations object
154
- private def track_with_analysis_internal(klass, allocations)
153
+ # Start tracking with call path analysis.
154
+ #
155
+ # @parameter klass [Class] The class to track with detailed analysis.
156
+ def track(klass, allocations = nil)
157
+ # Track the class and get the allocations object
158
+ allocations ||= @capture.track(klass)
159
+
160
+ # Set up call tree for this class
155
161
  tree = @call_trees[klass] = CallTree.new
156
162
  depth = @depth
157
163
  filter = @filter
158
164
 
159
- # Register callback on allocations object with new signature:
165
+ # Register callback on allocations object:
160
166
  # - On :newobj - returns state (leaf node) which C extension stores
161
167
  # - On :freeobj - receives state back from C extension
162
168
  allocations.track do |klass, event, state|
@@ -166,43 +172,16 @@ module Memory
166
172
  locations = caller_locations(1, depth)
167
173
  filtered = locations.select(&filter)
168
174
  unless filtered.empty?
169
- # Record returns the leaf node - return it so C can store it
175
+ # Record returns the leaf node - return it so C can store it:
170
176
  tree.record(filtered)
171
177
  end
172
- # Return nil or the node - C will store whatever we return
178
+ # Return nil or the node - C will store whatever we return.
173
179
  when :freeobj
174
- # Decrement using the state (leaf node) passed back from C
175
- if state
176
- state.decrement_path!
177
- end
180
+ # Decrement using the state (leaf node) passed back from the native extension:
181
+ state&.decrement_path!
178
182
  end
179
183
  rescue Exception => error
180
- warn "Error in track_with_analysis_internal: #{error.message}\n#{error.backtrace.join("\n")}"
181
- end
182
- end
183
-
184
- # Start tracking allocations for a class (count only).
185
- def track(klass)
186
- return if @capture.tracking?(klass)
187
-
188
- @capture.track(klass)
189
- end
190
-
191
- # Start tracking with call path analysis.
192
- #
193
- # @parameter klass [Class] The class to track with detailed analysis.
194
- def track_with_analysis(klass)
195
- # Track the class if not already tracked
196
- unless @capture.tracking?(klass)
197
- @capture.track(klass)
198
- end
199
-
200
- # Enable analysis by setting callback on the allocations object
201
- @capture.each do |tracked_klass, allocations|
202
- if tracked_klass == klass
203
- track_with_analysis_internal(klass, allocations)
204
- break
205
- end
184
+ warn "Error in allocation tracking: #{error.message}\n#{error.backtrace.join("\n")}"
206
185
  end
207
186
  end
208
187
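The `:newobj` branch above captures a call stack with `caller_locations` and keeps only the frames that pass the filter. A gem-free sketch of that capture-and-filter step (the `depth` value and the filter body here are illustrative assumptions; the real values come from the Sampler's configuration):

```ruby
# Sketch of the capture-and-filter pattern used in the :newobj callback.
depth = 8
filter = ->(location) { !location.path.include?("/ruby/") }

capture_stack = lambda do
  # Skip 1 frame (the lambda itself), take up to `depth` frames:
  locations = caller_locations(1, depth) || []
  locations.select(&filter)
end

filtered = capture_stack.call
# Frames from the Ruby installation are dropped; application frames remain.
```

Capturing only `depth` frames bounds the per-allocation cost, and filtering before recording keeps the call tree focused on application code.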
 
@@ -280,11 +259,7 @@ module Memory
280
259
  private
281
260
 
282
261
  def default_filter
283
- ->(location) {path = location.path
284
- !path.include?("/gems/") &&
285
- !path.include?("/ruby/") &&
286
- !path.start_with?("(eval)")
287
- }
262
+ ->(location) {!location.path.match?(%r{/(gems|ruby)/|\A\(eval\)})}
288
263
  end
289
264
  end
290
265
  end
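The refactored `default_filter` condenses the three separate path checks into a single regexp: a frame is kept unless its path contains `/gems/` or `/ruby/`, or begins with `(eval)`. A quick illustration (here the lambda tests plain path strings for simplicity; the real filter receives `Thread::Backtrace::Location` objects and tests their `#path`):

```ruby
# The condensed default filter from the diff, adapted to take path strings:
filter = ->(path) { !path.match?(%r{/(gems|ruby)/|\A\(eval\)}) }

paths = [
  "/app/models/user.rb",                 # application code: kept
  "/usr/lib/ruby/3.3.0/set.rb",          # standard library: rejected
  "/bundle/gems/rack-3.0.0/lib/rack.rb", # installed gem: rejected
  "(eval)",                              # eval'd code: rejected
]

kept = paths.select(&filter)
# => ["/app/models/user.rb"]
```

Collapsing the checks into one `match?` call avoids allocating a `MatchData` object per frame, which matters inside an allocation-tracking callback.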
@@ -7,7 +7,7 @@
7
7
  module Memory
8
8
  # @namespace
9
9
  module Profiler
10
- VERSION = "1.1.2"
10
+ VERSION = "1.1.4"
11
11
  end
12
12
  end
13
13
 
data/readme.md CHANGED
@@ -9,14 +9,14 @@ Efficient memory allocation tracking focused on **retained objects only**. Autom
9
9
  - **Retained Objects Only**: Uses `RUBY_INTERNAL_EVENT_NEWOBJ` and `RUBY_INTERNAL_EVENT_FREEOBJ` to automatically track only objects that survive GC.
10
10
 - **O(1) Live Counts**: Maintains per-class counters updated on alloc/free - no heap enumeration needed!
11
11
  - **Tree-Based Analysis**: Deduplicates common call paths using an efficient tree structure.
12
- - **Native C Extension**: **Required** - uses Ruby internal events not available in pure Ruby.
13
- - **Configurable Depth**: Control how deep to capture call stacks.
14
12
 
15
13
  ## Usage
16
14
 
17
15
  Please see the [project documentation](https://socketry.github.io/memory-profiler/) for more details.
18
16
 
19
- - [Getting Started](https://socketry.github.io/memory-profiler/guides/getting-started/index) - This guide explains how to use `memory-profiler` to detect and diagnose memory leaks in Ruby applications.
17
+ - [Getting Started](https://socketry.github.io/memory-profiler/guides/getting-started/index) - This guide explains how to use `memory-profiler` to automatically detect and diagnose memory leaks in Ruby applications.
18
+
19
+ - [Rack Integration](https://socketry.github.io/memory-profiler/guides/rack-integration/index) - This guide explains how to integrate `memory-profiler` into Rack applications for automatic memory leak detection.
20
20
 
21
21
  ## Releases
22
22
 
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: memory-profiler
3
3
  version: !ruby/object:Gem::Version
4
- version: 1.1.2
4
+ version: 1.1.4
5
5
  platform: ruby
6
6
  authors:
7
7
  - Samuel Williams
@@ -45,12 +45,14 @@ extra_rdoc_files: []
45
45
  files:
46
46
  - context/getting-started.md
47
47
  - context/index.yaml
48
+ - context/rack-integration.md
48
49
  - ext/extconf.rb
49
50
  - ext/memory/profiler/allocations.c
50
51
  - ext/memory/profiler/allocations.h
51
52
  - ext/memory/profiler/capture.c
52
53
  - ext/memory/profiler/capture.h
53
54
  - ext/memory/profiler/profiler.c
55
+ - ext/memory/profiler/queue.h
54
56
  - lib/memory/profiler.rb
55
57
  - lib/memory/profiler/call_tree.rb
56
58
  - lib/memory/profiler/capture.rb
@@ -72,7 +74,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
72
74
  requirements:
73
75
  - - ">="
74
76
  - !ruby/object:Gem::Version
75
- version: '3.2'
77
+ version: '3.3'
76
78
  required_rubygems_version: !ruby/object:Gem::Requirement
77
79
  requirements:
78
80
  - - ">="
metadata.gz.sig CHANGED
Binary file