memory-profiler 1.1.1 → 1.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: e818941dde11b9247c4451e48d59ded21116b7cbcd40d1885b200f9124a1adf4
- data.tar.gz: 1ad6e8a133b29d3fa6011c5f21c5b15dca3a2b1f3c8839f5294bce46a578d82e
+ metadata.gz: fdf7841a9d0249712c9bd140447e0063b5b543cae942bc1dedc09682472f3aaf
+ data.tar.gz: 14f9521f93447843b93aff1dfe15b12fce5f9491bb6a6055caf1d90cce304f17
  SHA512:
- metadata.gz: f8d61cf642ed4fd9d0b0f6ceae5f3e09f53351df3f134245870edce7d1fbc7eea2ca82b87aa92abc9d7308444a4a0be5aaadadb4d7d613840e05ec34df022117
- data.tar.gz: d67925a5753683b75eca7b49a861038cb28d52a537a04c459cb8808ddcb1c582f3512ecccbd922bd31f649101f3b726f3e60c5e6f5cf3072f25b5ee2a55e5a77
+ metadata.gz: 43ba6c482b4f9e6e80e5f3213435b158b954087790e19ac4effb77fe4d06d602027c6668f63aa0b68bd592c073ac112f98735da4249d68829913d1328bb21cb4
+ data.tar.gz: b30e212a1d6a315a4fc94d9dfad066d0f1e2ec4f73dbf7f5bf757bd85e6279ddb088e87168cb6cc158b99fcb218df5ec6f3e7ca60254c8a071478959cbda4817
checksums.yaml.gz.sig CHANGED
Binary file
@@ -1,6 +1,6 @@
  # Getting Started

- This guide explains how to use `memory-profiler` to detect and diagnose memory leaks in Ruby applications.
+ This guide explains how to use `memory-profiler` to automatically detect and diagnose memory leaks in Ruby applications.

  ## Installation

@@ -12,218 +12,127 @@ $ bundle add memory-profiler

  ## Core Concepts

- Memory leaks happen when your application creates objects that should be garbage collected but remain referenced indefinitely. Over time, this causes memory usage to grow unbounded, eventually leading to performance degradation or out-of-memory crashes.
+ Memory leaks happen when your application creates objects that should be garbage collected but remain referenced indefinitely. Over time, this causes unbounded memory growth, leading to performance degradation or crashes.

- `memory-profiler` helps you find memory leaks by tracking object allocations in real-time:
-
- - **{ruby Memory::Profiler::Capture}** monitors allocations using Ruby's internal NEWOBJ/FREEOBJ events.
- - **{ruby Memory::Profiler::CallTree}** aggregates allocation call paths to identify leak sources.
+ - {ruby Memory::Profiler::Capture} monitors allocations using Ruby's internal NEWOBJ/FREEOBJ events.
+ - {ruby Memory::Profiler::CallTree} aggregates allocation call paths to identify leak sources.
  - **No heap enumeration** - uses O(1) counters updated automatically by the VM.

- ## Usage
-
- ### Monitor Memory Growth
+ ## Basic Usage

- Start by identifying which classes are accumulating objects:
+ The simplest way to detect memory leaks is to run the automatic sampler:

  ~~~ ruby
  require 'memory/profiler'

- # Create a capture instance:
- capture = Memory::Profiler::Capture.new
+ # Create a sampler that monitors all allocations:
+ sampler = Memory::Profiler::Sampler.new(
+ # Call stack depth for analysis:
+ depth: 10,

- # Start tracking all object allocations:
- capture.start
+ # Enable detailed tracking after 10 increases:
+ increases_threshold: 10
+ )

- # Run your application code...
- run_your_app
+ sampler.start

- # Check live object counts for common classes:
- puts "Hashes: #{capture.count_for(Hash)}"
- puts "Arrays: #{capture.count_for(Array)}"
- puts "Strings: #{capture.count_for(String)}"
+ # Run periodic sampling in a background thread:
+ Thread.new do
+ sampler.run(interval: 60) do |sample|
+ puts "⚠️ #{sample.target} growing: #{sample.current_size} objects (#{sample.increases} increases)"
+
+ # After 10 increases, detailed statistics are automatically available:
+ if sample.increases >= 10
+ statistics = sampler.statistics(sample.target)
+ puts "Top leak sources:"
+ statistics[:top_paths].each do |path_data|
+ puts " #{path_data[:count]}x from: #{path_data[:path].first}"
+ end
+ end
+ end
+ end

- capture.stop
+ # Your application runs here...
+ objects = []
+ while true
+ # Simulate a memory leak:
+ objects << Hash.new
+ sleep 0.1
+ end
  ~~~

- **What this tells you**: Which object types are growing over time. If Hash count keeps increasing across multiple samples, you likely have a Hash leak.
+ **What happens:**
+ 1. Sampler automatically tracks every class that allocates objects.
+ 2. Every 60 seconds, checks if any class grew significantly (>1000 objects).
+ 3. Reports growth via the block you provide.
+ 4. After 10 sustained increases, automatically captures call paths.
+ 5. You can then query `statistics(klass)` to find leak sources.

- ### Find the Leak Source
+ ## Manual Investigation

- Once you've identified a leaking class, use call path analysis to find WHERE allocations come from:
+ If you already know which class is leaking, you can investigate immediately:

  ~~~ ruby
- # Create a sampler with call path analysis:
- sampler = Memory::Profiler::Sampler.new(depth: 10)
+ sampler = Memory::Profiler::Sampler.new(depth: 15)
+ sampler.start

- # Track the leaking class with analysis:
+ # Enable detailed tracking for specific class:
  sampler.track_with_analysis(Hash)
- sampler.start

  # Run code that triggers the leak:
- simulate_leak
+ 1000.times { process_request }

- # Analyze where allocations come from:
+ # Analyze:
  statistics = sampler.statistics(Hash)

- puts "Live objects: #{statistics[:live_count]}"
+ puts "Live Hashes: #{statistics[:live_count]}"
  puts "\nTop allocation sources:"
  statistics[:top_paths].first(5).each do |path_data|
- puts "\n#{path_data[:count]} allocations from:"
- path_data[:path].each { |frame| puts " #{frame}" }
+ puts "\n#{path_data[:count]} allocations from:"
+ path_data[:path].each { |frame| puts " #{frame}" }
  end

- sampler.stop
- ~~~
-
- **What this shows**: The complete call stacks that led to Hash allocations. Look for unexpected paths or paths that appear repeatedly.
-
- ## Real-World Example
-
- Let's say you notice your app's memory growing over time. Here's how to diagnose it:
-
- ~~~ ruby
- require 'memory/profiler'
-
- # Setup monitoring:
- capture = Memory::Profiler::Capture.new
- capture.start
-
- # Take baseline measurement:
- GC.start # Clean up old objects first
- baseline = {
- hashes: capture.count_for(Hash),
- arrays: capture.count_for(Array),
- strings: capture.count_for(String)
- }
-
- # Run your application for a period:
- # In production: sample periodically (every 60 seconds)
- # In development: run through typical workflows
- sleep 60
-
- # Check what grew:
- current = {
- hashes: capture.count_for(Hash),
- arrays: capture.count_for(Array),
- strings: capture.count_for(String)
- }
-
- # Report growth:
- current.each do |type, count|
- growth = count - baseline[type]
- if growth > 100
- puts "⚠️ #{type} grew by #{growth} objects"
- end
+ puts "\nHotspot frames:"
+ statistics[:hotspots].first(5).each do |location, count|
+ puts " #{location}: #{count}"
  end

- capture.stop
+ sampler.stop!
  ~~~

- If Hashes grew significantly, enable detailed tracking:
-
- ~~~ ruby
- # Create detailed sampler:
- sampler = Memory::Profiler::Sampler.new(depth: 15)
- sampler.track_with_analysis(Hash)
- sampler.start
-
- # Run suspicious code path:
- process_user_requests(1000)
-
- # Find the culprits:
- statistics = sampler.statistics(Hash)
- statistics[:top_paths].first(3).each_with_index do |path_data, i|
- puts "\n#{i+1}. #{path_data[:count]} Hash allocations:"
- path_data[:path].first(5).each { |frame| puts " #{frame}" }
- end
-
- sampler.stop
- ~~~
-
- ## Best Practices
-
- ### When Tracking in Production
-
- 1. **Start tracking AFTER startup**: Call `GC.start` before `capture.start` to avoid counting initialization objects
- 2. **Use count-only mode for monitoring**: `capture.track(Hash)` (no callback) has minimal overhead
- 3. **Enable analysis only when investigating**: Call path analysis has higher overhead
- 4. **Sample periodically**: Take measurements every 60 seconds rather than continuously
- 5. **Stop when done**: Always call `stop()` to remove event hooks
-
- ### Performance Considerations
-
- **Count-only tracking** (no callback):
- - Minimal overhead (~5-10% on allocation hotpath)
- - Safe for production monitoring
- - Tracks all classes automatically
-
- **Call path analysis** (with callback):
- - Higher overhead (captures `caller_locations` on every allocation)
- - Use during investigation, not continuous monitoring
- - Only track specific classes you're investigating
-
- ### Avoiding False Positives
-
- Objects allocated before tracking starts but freed after will show as negative or zero:
-
- ~~~ ruby
- # ❌ Wrong - counts existing objects:
- capture.start
- 100.times { {} }
- GC.start # Frees old + new objects → underflow
-
- # ✅ Right - clean slate first:
- GC.start # Clear old objects
- capture.start
- 100.times { {} }
- ~~~
-
- ## Common Scenarios
-
- ### Detecting Cache Leaks
-
- ~~~ ruby
- # Monitor your cache class:
- capture = Memory::Profiler::Capture.new
- capture.start
-
- cache_baseline = capture.count_for(CacheEntry)
-
- # Run for a period:
- sleep 300 # 5 minutes
-
- cache_current = capture.count_for(CacheEntry)
-
- if cache_current > cache_baseline * 2
- puts "⚠️ Cache is leaking! #{cache_current - cache_baseline} entries added"
- # Enable detailed tracking to find the source
- end
- ~~~
-
- ### Finding Retention in Request Processing
-
- ~~~ ruby
- # Track during request processing:
- sampler = Memory::Profiler::Sampler.new
- sampler.track_with_analysis(Hash)
- sampler.start
-
- # Process requests:
- 1000.times do
- process_request
- end
-
- # Check if Hashes are being retained:
- statistics = sampler.statistics(Hash)
-
- if statistics[:live_count] > 1000
- puts "Leaking #{statistics[:live_count]} Hashes per 1000 requests!"
- statistics[:top_paths].first(3).each do |path_data|
- puts "\n#{path_data[:count]}x from:"
- puts path_data[:path].join("\n ")
- end
- end
-
- sampler.stop
- ~~~
+ ## Understanding the Output
+
+ **Sample data** (from growth detection):
+ - `target`: The class showing growth
+ - `current_size`: Current live object count
+ - `increases`: Number of sustained growth events (>1000 objects each)
+ - `threshold`: Minimum growth to trigger an increase
+
+ **Statistics** (after detailed tracking enabled):
+ - `live_count`: Current retained objects
+ - `top_paths`: Complete call stacks ranked by allocation frequency
+ - `hotspots`: Individual frames aggregated across all paths
+
+ **Top paths** show WHERE objects are created:
+ ```
+ 50 allocations from:
+ app/services/processor.rb:45:in 'process_item'
+ app/workers/job.rb:23:in 'perform'
+ ```
+
+ **Hotspots** show which lines appear most across all paths:
+ ```
+ app/services/processor.rb:45: 150 ← This line in many different call stacks
+ ```
+
+ ## Performance Considerations
+
+ **Automatic mode** (recommended for production):
+ - Minimal overhead initially (just counting).
+ - Detailed tracking only enabled when leaks detected.
+ - 60-second sampling interval is non-intrusive.
+
+ **Manual tracking** (for investigation):
+ - Higher overhead (captures `caller_locations` on every allocation).
+ - Use during debugging, not continuous monitoring.
+ - Only track specific classes you're investigating.
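
The new guide drops the old baseline-measurement walkthrough in favour of the automatic sampler. For a one-shot investigation of a single class, the manual API documented above composes into a small helper; the sketch below uses only the `Sampler` methods shown in the guide (`track_with_analysis`, `statistics`, `stop!`), and the `investigate` helper itself is illustrative rather than part of the gem.

~~~ ruby
require 'memory/profiler'

# Illustrative helper (not part of the gem): enable detailed tracking for one
# class, run a workload, then report the top allocation paths.
def investigate(klass, depth: 15, top: 5)
	sampler = Memory::Profiler::Sampler.new(depth: depth)
	sampler.start
	sampler.track_with_analysis(klass)
	
	yield # Run the workload suspected of leaking.
	
	statistics = sampler.statistics(klass)
	puts "Live #{klass}: #{statistics[:live_count]}"
	statistics[:top_paths].first(top).each do |path_data|
		puts "#{path_data[:count]} allocations from: #{path_data[:path].first}"
	end
	
	statistics
ensure
	sampler&.stop!
end

# For example (process_request is a placeholder workload):
# investigate(Hash) { 1000.times { process_request } }
~~~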
data/context/index.yaml CHANGED
@@ -8,5 +8,9 @@ metadata:
  files:
  - path: getting-started.md
  title: Getting Started
- description: This guide explains how to use `memory-profiler` to detect and diagnose
- memory leaks in Ruby applications.
+ description: This guide explains how to use `memory-profiler` to automatically detect
+ and diagnose memory leaks in Ruby applications.
+ - path: rack-integration.md
+ title: Rack Integration
+ description: This guide explains how to integrate `memory-profiler` into Rack applications
+ for automatic memory leak detection.
@@ -0,0 +1,70 @@
+ # Rack Integration
+
+ This guide explains how to integrate `memory-profiler` into Rack applications for automatic memory leak detection.
+
+ ## Overview
+
+ The Rack middleware pattern provides a clean way to add memory monitoring to your application. The sampler runs in a background thread, automatically detecting leaks without impacting request processing.
+
+ ## Basic Middleware
+
+ Create a middleware that monitors memory in the background:
+
+ ~~~ ruby
+ # app/middleware/memory_monitoring.rb
+ require 'console'
+ require 'memory/profiler'
+
+ class MemoryMonitoring
+ def initialize(app)
+ @app = app
+
+ # Create sampler with automatic leak detection:
+ @sampler = Memory::Profiler::Sampler.new(
+ # Use up to 10 caller locations for leak call graph analysis:
+ depth: 10,
+ # Enable detailed tracking after 10 increases:
+ increases_threshold: 10
+ )
+
+ @sampler.start
+ Console.info("Memory monitoring enabled")
+
+ # Background thread runs periodic sampling:
+ @thread = Thread.new do
+ @sampler.run(interval: 60) do |sample|
+ Console.warn(sample.target, "Memory usage increased!", sample: sample)
+
+ # After threshold, get leak sources:
+ if sample.increases >= 10
+ if statistics = @sampler.statistics(sample.target)
+ Console.error(sample.target, "Memory leak analysis:", statistics: statistics)
+ end
+ end
+ end
+ end
+ end
+
+ def call(env)
+ @app.call(env)
+ end
+
+ def shutdown
+ @thread&.kill
+ @sampler&.stop!
+ end
+ end
+ ~~~
+
+ ## Adding to config.ru
+
+ Add the middleware to your Rack application:
+
+ ~~~ ruby
+ # config.ru
+ require_relative 'app/middleware/memory_monitoring'
+
+ use MemoryMonitoring
+
+ run YourApp
+ ~~~
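
One caveat with the guide above: nothing in that `config.ru` ever calls the middleware's `shutdown`, so the background thread and sampler are only cleaned up when the process dies. A hedged alternative wiring, assuming the same `MemoryMonitoring` class and `YourApp` endpoint, is to instantiate the middleware directly so process exit can be hooked; `at_exit` here is plain Ruby, not a gem API.

~~~ ruby
# config.ru (sketch): instantiate the middleware yourself so its #shutdown
# can be registered with at_exit; Rack itself never calls #shutdown.
require_relative 'app/middleware/memory_monitoring'

monitoring = MemoryMonitoring.new(YourApp)
at_exit { monitoring.shutdown }

run monitoring
~~~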
data/ext/extconf.rb CHANGED
@@ -16,7 +16,7 @@ if ENV.key?("RUBY_DEBUG")
16
16
  append_cflags(["-DRUBY_DEBUG", "-O0"])
17
17
  end
18
18
 
19
- $srcs = ["memory/profiler/profiler.c", "memory/profiler/capture.c"]
19
+ $srcs = ["memory/profiler/profiler.c", "memory/profiler/capture.c", "memory/profiler/allocations.c"]
20
20
  $VPATH << "$(srcdir)/memory/profiler"
21
21
 
22
22
  # Check for required headers
@@ -0,0 +1,179 @@
+ // Released under the MIT License.
+ // Copyright, 2025, by Samuel Williams.
+
+ #include "allocations.h"
+
+ #include "ruby.h"
+ #include "ruby/debug.h"
+ #include "ruby/st.h"
+ #include <stdio.h>
+
+ static VALUE Memory_Profiler_Allocations = Qnil;
+
+ // Helper to mark object_states table values
+ static int Memory_Profiler_Allocations_object_states_mark(st_data_t key, st_data_t value, st_data_t arg) {
+ VALUE object = (VALUE)key;
+ rb_gc_mark_movable(object);
+
+ VALUE state = (VALUE)value;
+ if (!NIL_P(state)) {
+ rb_gc_mark_movable(state);
+ }
+ return ST_CONTINUE;
+ }
+
+ // Foreach callback for st_foreach_with_replace (iteration logic)
+ static int Memory_Profiler_Allocations_object_states_foreach(st_data_t key, st_data_t value, st_data_t argp, int error) {
+ // Return ST_REPLACE to trigger the replace callback for each entry
+ return ST_REPLACE;
+ }
+
+ // Replace callback for st_foreach_with_replace to update object_states keys and values during compaction
+ static int Memory_Profiler_Allocations_object_states_compact(st_data_t *key, st_data_t *value, st_data_t data, int existing) {
+ VALUE old_object = (VALUE)*key;
+ VALUE old_state = (VALUE)*value;
+
+ VALUE new_object = rb_gc_location(old_object);
+ VALUE new_state = rb_gc_location(old_state);
+
+ // Update key if it moved
+ if (old_object != new_object) {
+ *key = (st_data_t)new_object;
+ }
+
+ // Update value if it moved
+ if (old_state != new_state) {
+ *value = (st_data_t)new_state;
+ }
+
+ return ST_CONTINUE;
+ }
+
+ // GC mark function for Allocations
+ static void Memory_Profiler_Allocations_mark(void *ptr) {
+ struct Memory_Profiler_Capture_Allocations *record = ptr;
+
+ if (!record) {
+ return;
+ }
+
+ if (!NIL_P(record->callback)) {
+ rb_gc_mark_movable(record->callback);
+ }
+
+ // Mark object_states table if it exists
+ if (record->object_states) {
+ st_foreach(record->object_states, Memory_Profiler_Allocations_object_states_mark, 0);
+ }
+ }
+
+ // GC free function for Allocations
+ static void Memory_Profiler_Allocations_free(void *ptr) {
+ struct Memory_Profiler_Capture_Allocations *record = ptr;
+
+ if (record->object_states) {
+ st_free_table(record->object_states);
+ }
+
+ xfree(record);
+ }
+
+ // GC compact function for Allocations
+ static void Memory_Profiler_Allocations_compact(void *ptr) {
+ struct Memory_Profiler_Capture_Allocations *record = ptr;
+
+ // Update callback if it moved
+ if (!NIL_P(record->callback)) {
+ record->callback = rb_gc_location(record->callback);
+ }
+
+ // Update object_states table if it exists
+ if (record->object_states && record->object_states->num_entries > 0) {
+ if (st_foreach_with_replace(record->object_states, Memory_Profiler_Allocations_object_states_foreach, Memory_Profiler_Allocations_object_states_compact, 0)) {
+ rb_raise(rb_eRuntimeError, "object_states modified during GC compaction");
+ }
+ }
+ }
+
+ static const rb_data_type_t Memory_Profiler_Allocations_type = {
+ "Memory::Profiler::Allocations",
+ {
+ .dmark = Memory_Profiler_Allocations_mark,
+ .dcompact = Memory_Profiler_Allocations_compact,
+ .dfree = Memory_Profiler_Allocations_free,
+ },
+ 0, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED
+ };
+
+ // Wrap an allocations record
+ VALUE Memory_Profiler_Allocations_wrap(struct Memory_Profiler_Capture_Allocations *record) {
+ return TypedData_Wrap_Struct(Memory_Profiler_Allocations, &Memory_Profiler_Allocations_type, record);
+ }
+
+ // Get allocations record from wrapper
+ struct Memory_Profiler_Capture_Allocations* Memory_Profiler_Allocations_get(VALUE self) {
+ struct Memory_Profiler_Capture_Allocations *record;
+ TypedData_Get_Struct(self, struct Memory_Profiler_Capture_Allocations, &Memory_Profiler_Allocations_type, record);
+ return record;
+ }
+
+ // Allocations#new_count
+ static VALUE Memory_Profiler_Allocations_new_count(VALUE self) {
+ struct Memory_Profiler_Capture_Allocations *record = Memory_Profiler_Allocations_get(self);
+ return SIZET2NUM(record->new_count);
+ }
+
+ // Allocations#free_count
+ static VALUE Memory_Profiler_Allocations_free_count(VALUE self) {
+ struct Memory_Profiler_Capture_Allocations *record = Memory_Profiler_Allocations_get(self);
+ return SIZET2NUM(record->free_count);
+ }
+
+ // Allocations#retained_count
+ static VALUE Memory_Profiler_Allocations_retained_count(VALUE self) {
+ struct Memory_Profiler_Capture_Allocations *record = Memory_Profiler_Allocations_get(self);
+ // Handle underflow when free_count > new_count
+ size_t retained = record->free_count > record->new_count ? 0 : record->new_count - record->free_count;
+ return SIZET2NUM(retained);
+ }
+
+ // Allocations#track { |klass| ... }
+ static VALUE Memory_Profiler_Allocations_track(int argc, VALUE *argv, VALUE self) {
+ struct Memory_Profiler_Capture_Allocations *record = Memory_Profiler_Allocations_get(self);
+
+ VALUE callback;
+ rb_scan_args(argc, argv, "&", &callback);
+
+ // Use write barrier - self (Allocations wrapper) keeps Capture alive, which keeps callback alive
+ RB_OBJ_WRITE(self, &record->callback, callback);
+
+ return self;
+ }
+
+ // Clear/reset allocation counts and state for a record
+ void Memory_Profiler_Allocations_clear(VALUE allocations) {
+ struct Memory_Profiler_Capture_Allocations *record = Memory_Profiler_Allocations_get(allocations);
+ record->new_count = 0; // Reset allocation count
+ record->free_count = 0; // Reset free count
+ RB_OBJ_WRITE(allocations, &record->callback, Qnil); // Clear callback with write barrier
+
+ // Clear object states
+ if (record->object_states) {
+ st_free_table(record->object_states);
+ record->object_states = NULL;
+ }
+ }
+
+ void Init_Memory_Profiler_Allocations(VALUE Memory_Profiler)
+ {
+ // Allocations class - wraps allocation data for a specific class
+ Memory_Profiler_Allocations = rb_define_class_under(Memory_Profiler, "Allocations", rb_cObject);
+
+ // Allocations objects are only created internally via wrap, never from Ruby:
+ rb_undef_alloc_func(Memory_Profiler_Allocations);
+
+ rb_define_method(Memory_Profiler_Allocations, "new_count", Memory_Profiler_Allocations_new_count, 0);
+ rb_define_method(Memory_Profiler_Allocations, "free_count", Memory_Profiler_Allocations_free_count, 0);
+ rb_define_method(Memory_Profiler_Allocations, "retained_count", Memory_Profiler_Allocations_retained_count, 0);
+ rb_define_method(Memory_Profiler_Allocations, "track", Memory_Profiler_Allocations_track, -1); // -1 to accept block
+ }
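
`Memory::Profiler::Allocations` instances can only be created inside the extension (`rb_undef_alloc_func` above), so Ruby code receives them from the capture layer rather than constructing them; that accessor is outside this diff. Purely as an illustration of the reader methods defined above, with the `allocations` argument assumed to be such an instance:

~~~ ruby
# Sketch only: summarize a Memory::Profiler::Allocations record. How the
# instance is obtained from the capture layer is not shown in this diff.
def summarize(allocations)
	puts "new:      #{allocations.new_count}"      # allocations seen since tracking started
	puts "freed:    #{allocations.free_count}"     # frees seen since tracking started
	puts "retained: #{allocations.retained_count}" # clamped at zero if free_count > new_count
	
	# Per the C comment above, #track takes a block invoked per allocation; its
	# return value is stored as per-object state (see allocations.h).
	allocations.track do |klass|
		Time.now # example state to associate with the newly allocated object
	end
end
~~~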
@@ -0,0 +1,31 @@
+ // Released under the MIT License.
+ // Copyright, 2025, by Samuel Williams.
+
+ #pragma once
+
+ #include "ruby.h"
+ #include "ruby/st.h"
+
+ // Per-class allocation tracking record
+ struct Memory_Profiler_Capture_Allocations {
+ VALUE callback; // Optional Ruby proc/lambda to call on allocation
+ size_t new_count; // Total allocations seen since tracking started
+ size_t free_count; // Total frees seen since tracking started
+ // Live count = new_count - free_count
+
+ // For detailed tracking: map object (VALUE) => state (VALUE)
+ // State is returned from callback on :newobj and passed back on :freeobj
+ st_table *object_states;
+ };
+
+ // Wrap an allocations record in a VALUE
+ VALUE Memory_Profiler_Allocations_wrap(struct Memory_Profiler_Capture_Allocations *record);
+
+ // Get allocations record from wrapper VALUE
+ struct Memory_Profiler_Capture_Allocations* Memory_Profiler_Allocations_get(VALUE self);
+
+ // Clear/reset allocation counts and state for a record
+ void Memory_Profiler_Allocations_clear(VALUE allocations);
+
+ // Initialize the Allocations class
+ void Init_Memory_Profiler_Allocations(VALUE Memory_Profiler);