carson 3.27.0 → 3.27.1

This diff shows the content of publicly released package versions as they appear in their respective public registries. It is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: e71c18b520bc6869f6f22cff8593d59b7e4d38a58b7c9f7648d75252fdd8691c
-  data.tar.gz: 468b1b132620b3844ccd5df01907c149ddce5bf987bf2aa4fee6a17f3301cea7
+  metadata.gz: 1551ba09ad6dd7d0d2f0341f75e764d41334085b8e27015d7dc4d4e09007a25a
+  data.tar.gz: 9abc7ea81d920afd602f444c534731e8b5b3cf2b8762da40a9ef411d9d984845
 SHA512:
-  metadata.gz: 6a3842a3fdb48ede44c4def41b1533ae43f68b5267a8d1280f1cdf7685b161406f58365002e20b34bb4dfa5a9fc2e788a970cf6c406dd877087844197114148d
-  data.tar.gz: 8ee70fb5f5f06bfc417ae39634f9277a07ca5aaa867a57b14960987a12cbe66a44ea193fc167831645025cd573d599f8cf385e8849c2a40d94ff0c6c43eb5967
+  metadata.gz: 402d821608dcb4b5a448b3c6a1d9b7cf3de3415b6ab87323e035623f118d0a91e91ce50405bb25d4d608324682eff6ccdddfd7e4c13319b8e339dcc46e699dd0
+  data.tar.gz: 184128f03321ef5e59c6168a1fcea4c9a223f7d040aaeafa61661de7c0fad77fbd79549d143021c90db25cb7217d53c29eb18da46882744eaf0d85a705604eb8
data/API.md CHANGED
@@ -157,7 +157,7 @@ Environment overrides:
 - `agent.codex` / `agent.claude`: provider-specific options (reserved).
 - `check_wait`: seconds to wait for CI checks before classifying (default: `30`).
 - `merge.method`: `"squash"` only in governed mode.
-- `state_path`: JSON file path for active deliveries and revisions.
+- `state_path`: JSON ledger path for active deliveries and revisions. Legacy SQLite ledgers are imported automatically on first read; explicit legacy `.sqlite3` paths keep working after import.
 
 `template` schema:
 
data/MANUAL.md CHANGED
@@ -66,12 +66,12 @@ on:
 
 jobs:
   governance:
-    uses: wanghailei/carson/.github/workflows/carson_policy.yml@v1.0.0
+    uses: wanghailei/carson/.github/workflows/carson_policy.yml@v3.27.1
     secrets:
       CARSON_READ_TOKEN: ${{ secrets.CARSON_READ_TOKEN }}
     with:
-      carson_ref: "v1.0.0"
-      carson_version: "1.0.0"
+      carson_ref: "v3.27.1"
+      carson_version: "3.27.1"
       rubocop_version: "1.81.0"
 ```
 
data/RELEASE.md CHANGED
@@ -5,6 +5,18 @@ Release-note scope rule:
 - `RELEASE.md` records only version deltas, breaking changes, and migration actions.
 - Operational usage guides live in `MANUAL.md` and `API.md`.
 
+## 3.27.1
+
+### What changed
+
+- **JSON ledger preserves legacy state and FIFO ordering** — legacy SQLite ledgers now import automatically into the JSON store on first read, same branch/head deliveries collapse across repo-path aliases, and queue order stays first-in-first-out even when multiple deliveries share the same timestamp.
+- **sqlite3 support restored for migration** — `carson.gemspec` depends on `sqlite3` again so Carson can import legacy SQLite ledgers, and CI installs the gem before running unit tests.
+- **Script Ruby guards now follow the gem contract** — `script/ci_smoke.sh` and `script/install_global_carson.sh` read the minimum supported Ruby version from `carson.gemspec` instead of hard-coding it, so smoke checks and installer behaviour stay aligned with the published gem.
+
+### Migration
+
+- Existing SQLite ledgers are imported automatically on first read. Explicit legacy `.sqlite3` paths keep working after import.
+
 ## 3.27.0
 
 ### What changed
@@ -27,7 +39,9 @@ Release-note scope rule:
 - **Govern now finds worktree-created deliveries** — `repository_record` stored the worktree CWD as `repo_path` in the ledger, but `govern` looked up deliveries by the main tree path from config. The SQL query never matched, so `carson govern` always reported "no active deliveries" for worktree-created PRs. Now `repository_record` uses `main_worktree_root` so the ledger key is always the canonical main tree path. Also fixed the govern fallback path, `reconcile_delivery!`, `housekeep_repo!`, and `review_evidence` for the same mismatch.
 - **Status shows canonical repository name** — `carson status` from a worktree displayed the worktree folder name (e.g. `feature-branch`) as `Repository:` instead of the actual repository name. Now correctly shows the canonical name.
 
-### No migration required
+### Migration note
+
+- Existing SQLite ledgers are imported automatically on first read. No manual cleanup is required before upgrading.
 
 ## 3.23.3
 
data/VERSION CHANGED
@@ -1 +1 @@
-3.27.0
+3.27.1
data/carson.gemspec CHANGED
@@ -28,6 +28,7 @@ Gem::Specification.new do |spec|
   spec.bindir = "exe"
   spec.executables = [ "carson" ]
   spec.require_paths = [ "lib" ]
+  spec.add_dependency "sqlite3", ">= 1.3", "< 3"
   spec.files = Dir.glob( "{lib,exe,templates,hooks}/**/*", File::FNM_DOTMATCH ).select { |path| File.file?( path ) } + [
     ".github/workflows/carson_policy.yml",
     "README.md",
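The dependency added above pins `sqlite3` to the range `">= 1.3", "< 3"`. RubyGems' own `Gem::Requirement` can confirm which versions that range admits; the version numbers below are illustrative:

```ruby
require "rubygems"

# The same constraint pair the gemspec declares.
requirement = Gem::Requirement.new( ">= 1.3", "< 3" )

# Anything from 1.3 up to (but excluding) 3 satisfies the range.
accepted = [ "1.3.0", "1.7.3", "2.6.0" ].all? { |v| requirement.satisfied_by?( Gem::Version.new( v ) ) }
rejected = [ "1.2.9", "3.0.0" ].none? { |v| requirement.satisfied_by?( Gem::Version.new( v ) ) }

puts accepted && rejected
```

The open-ended lower bound with a hard `< 3` cap admits both the sqlite3 1.x and 2.x gem series while excluding a future 3.x with potentially breaking API changes.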
data/lib/carson/ledger.rb CHANGED
@@ -7,10 +7,12 @@ module Carson
   class Ledger
     UNSET = Object.new
     ACTIVE_DELIVERY_STATES = Delivery::ACTIVE_STATES
+    SQLITE_HEADER = "SQLite format 3\0".b.freeze
 
     def initialize( path: )
       @path = File.expand_path( path )
       FileUtils.mkdir_p( File.dirname( @path ) )
+      migrate_legacy_state_if_needed!
     end
 
     attr_reader :path
@@ -20,23 +22,22 @@ module Carson
       timestamp = now_utc
 
       with_state do |state|
+        repo_paths = repo_identity_paths( repo_path: repository.path )
+        matches = matching_deliveries(
+          state: state,
+          repo_paths: repo_paths,
+          branch_name: branch_name,
+          head: head
+        )
         key = delivery_key( repo_path: repository.path, branch_name: branch_name, head: head )
-        existing = state[ "deliveries" ][ key ]
-
-        if existing
-          existing[ "repo_path" ] = repository.path
-          existing[ "worktree_path" ] = worktree_path
-          existing[ "status" ] = status
-          existing[ "pr_number" ] = pr_number
-          existing[ "pr_url" ] = pr_url
-          existing[ "cause" ] = cause
-          existing[ "summary" ] = summary
-          existing[ "updated_at" ] = timestamp
-          return build_delivery( key: key, data: existing, repository: repository )
-        end
+        sequence = matches.map { |_existing_key, data| delivery_sequence( data: data ) }.compact.min
+        created_at = matches.map { |_existing_key, data| data.fetch( "created_at", "" ).to_s }.reject( &:empty? ).min || timestamp
+        revisions = merged_revisions( entries: matches )
+        matches.each { |existing_key, _data| state[ "deliveries" ].delete( existing_key ) }
 
         supersede_branch!( state: state, repo_path: repository.path, branch_name: branch_name, timestamp: timestamp )
         state[ "deliveries" ][ key ] = {
+          "sequence" => sequence || next_delivery_sequence!( state: state ),
           "repo_path" => repository.path,
           "branch_name" => branch_name,
           "head" => head,
@@ -46,11 +47,11 @@ module Carson
           "pr_url" => pr_url,
           "cause" => cause,
           "summary" => summary,
-          "created_at" => timestamp,
+          "created_at" => created_at,
           "updated_at" => timestamp,
           "integrated_at" => nil,
           "superseded_at" => nil,
-          "revisions" => []
+          "revisions" => revisions
         }
         build_delivery( key: key, data: state[ "deliveries" ][ key ], repository: repository )
       end
@@ -69,7 +70,7 @@ module Carson
 
       return nil if candidates.empty?
 
-      key, data = candidates.max_by { |k, d| [ d[ "updated_at" ].to_s, k ] }
+      key, data = candidates.max_by { |k, d| [ d[ "updated_at" ].to_s, delivery_sequence( data: d ), k ] }
       build_delivery( key: key, data: data )
     end
 
@@ -80,7 +81,7 @@ module Carson
 
       state[ "deliveries" ]
         .select { |_key, data| repo_paths.include?( data[ "repo_path" ] ) && ACTIVE_DELIVERY_STATES.include?( data[ "status" ] ) }
-        .sort_by { |key, data| [ data[ "created_at" ].to_s, key ] }
+        .sort_by { |key, data| [ delivery_sequence( data: data ), key ] }
         .map { |key, data| build_delivery( key: key, data: data ) }
     end
 
@@ -95,7 +96,7 @@ module Carson
           data[ "status" ] == "integrated" &&
             !data[ "worktree_path" ].to_s.strip.empty?
         end
-        .sort_by { |key, data| [ data[ "integrated_at" ].to_s, data[ "updated_at" ].to_s, key ] }
+        .sort_by { |key, data| [ data[ "integrated_at" ].to_s, delivery_sequence( data: data ), key ] }
         .map { |key, data| build_delivery( key: key, data: data ) }
     end
 
@@ -112,8 +113,7 @@ module Carson
       superseded_at: UNSET
     )
       with_state do |state|
-        data = state[ "deliveries" ][ delivery.key ]
-        raise "delivery not found: #{delivery.key}" unless data
+        key, data = resolve_delivery_entry( state: state, delivery: delivery )
 
         data[ "status" ] = status unless status.equal?( UNSET )
         data[ "pr_number" ] = pr_number unless pr_number.equal?( UNSET )
@@ -125,7 +125,7 @@ module Carson
         data[ "superseded_at" ] = superseded_at unless superseded_at.equal?( UNSET )
         data[ "updated_at" ] = now_utc
 
-        build_delivery( key: delivery.key, data: data, repository: delivery.repository )
+        build_delivery( key: key, data: data, repository: delivery.repository )
       end
     end
 
@@ -134,8 +134,7 @@ module Carson
       timestamp = now_utc
 
       with_state do |state|
-        data = state[ "deliveries" ][ delivery.key ]
-        raise "delivery not found: #{delivery.key}" unless data
+        _key, data = resolve_delivery_entry( state: state, delivery: delivery )
 
         revisions = data[ "revisions" ] ||= []
         next_number = ( revisions.map { |r| r[ "number" ].to_i }.max || 0 ) + 1
@@ -166,11 +165,7 @@ module Carson
 
     # Acquires file lock, loads state, yields for mutation, saves atomically, releases lock.
     def with_state
-      lock_path = "#{path}.lock"
-      FileUtils.mkdir_p( File.dirname( lock_path ) )
-      FileUtils.touch( lock_path )
-
-      File.open( lock_path, File::RDWR | File::CREAT ) do |lock_file|
+      with_state_lock do |lock_file|
         lock_file.flock( File::LOCK_EX )
         state = load_state
         result = yield state
@@ -180,21 +175,22 @@ module Carson
     end
 
     def load_state
-      return { "deliveries" => {}, "recovery_events" => [] } unless File.exist?( path )
+      return { "deliveries" => {}, "recovery_events" => [] } unless File.exist?( path )
 
-      raw = File.read( path )
+      raw = File.binread( path )
       return { "deliveries" => {}, "recovery_events" => [] } if raw.strip.empty?
 
       parsed = JSON.parse( raw )
      raise "state file must contain a JSON object at #{path}" unless parsed.is_a?( Hash )
       parsed[ "deliveries" ] ||= {}
-      parsed[ "recovery_events" ] ||= []
+      normalise_state!( state: parsed )
      parsed
-    rescue JSON::ParserError => exception
+    rescue JSON::ParserError, Encoding::InvalidByteSequenceError, Encoding::UndefinedConversionError => exception
       raise "invalid JSON in state file #{path}: #{exception.message}"
     end
 
     def save_state!( state )
+      normalise_state!( state: state )
       tmp_path = "#{path}.tmp"
       File.write( tmp_path, JSON.pretty_generate( state ) + "\n" )
       File.rename( tmp_path, path )
@@ -242,6 +238,228 @@ module Carson
       )
     end
 
+    def migrate_legacy_state_if_needed!
+      with_state_lock do |lock_file|
+        lock_file.flock( File::LOCK_EX )
+        source_path = legacy_sqlite_source_path
+        next unless source_path
+
+        state = load_legacy_sqlite_state( path: source_path )
+        save_state!( state )
+      end
+    end
+
+    def with_state_lock
+      lock_path = "#{path}.lock"
+      FileUtils.mkdir_p( File.dirname( lock_path ) )
+      FileUtils.touch( lock_path )
+
+      File.open( lock_path, File::RDWR | File::CREAT ) do |lock_file|
+        yield lock_file
+      end
+    end
+
+    def legacy_sqlite_source_path
+      return nil unless state_path_requires_migration?
+      return path if sqlite_database_file?( path: path )
+
+      legacy_path = legacy_state_path
+      return nil unless legacy_path
+      return legacy_path if sqlite_database_file?( path: legacy_path )
+
+      nil
+    end
+
+    def state_path_requires_migration?
+      return true if sqlite_database_file?( path: path )
+      return false if File.exist?( path )
+      !legacy_state_path.nil?
+    end
+
+    def legacy_state_path
+      return nil unless path.end_with?( ".json" )
+      path.sub( /\.json\z/, ".sqlite3" )
+    end
+
+    def sqlite_database_file?( path: )
+      return false unless File.file?( path )
+      File.binread( path, SQLITE_HEADER.bytesize ) == SQLITE_HEADER
+    rescue StandardError
+      false
+    end
+
+    def load_legacy_sqlite_state( path: )
+      begin
+        require "sqlite3"
+      rescue LoadError => exception
+        raise "legacy SQLite ledger found at #{path}, but sqlite3 support is unavailable: #{exception.message}"
+      end
+
+      database = open_legacy_sqlite_database( path: path )
+      deliveries = database.execute( "SELECT * FROM deliveries ORDER BY id ASC" )
+      revisions_by_delivery = database.execute(
+        "SELECT * FROM revisions ORDER BY delivery_id ASC, number ASC, id ASC"
+      ).group_by { |row| row.fetch( "delivery_id" ) }
+
+      state = {
+        "deliveries" => {},
+        "recovery_events" => [],
+        "next_sequence" => 1
+      }
+      deliveries.each do |row|
+        key = delivery_key(
+          repo_path: row.fetch( "repo_path" ),
+          branch_name: row.fetch( "branch_name" ),
+          head: row.fetch( "head" )
+        )
+        state[ "deliveries" ][ key ] = {
+          "sequence" => row.fetch( "id" ).to_i,
+          "repo_path" => row.fetch( "repo_path" ),
+          "branch_name" => row.fetch( "branch_name" ),
+          "head" => row.fetch( "head" ),
+          "worktree_path" => row.fetch( "worktree_path" ),
+          "status" => row.fetch( "status" ),
+          "pr_number" => row.fetch( "pr_number" ),
+          "pr_url" => row.fetch( "pr_url" ),
+          "cause" => row.fetch( "cause" ),
+          "summary" => row.fetch( "summary" ),
+          "created_at" => row.fetch( "created_at" ),
+          "updated_at" => row.fetch( "updated_at" ),
+          "integrated_at" => row.fetch( "integrated_at" ),
+          "superseded_at" => row.fetch( "superseded_at" ),
+          "revisions" => Array( revisions_by_delivery[ row.fetch( "id" ) ] ).map do |revision|
+            {
+              "number" => revision.fetch( "number" ).to_i,
+              "cause" => revision.fetch( "cause" ),
+              "provider" => revision.fetch( "provider" ),
+              "status" => revision.fetch( "status" ),
+              "started_at" => revision.fetch( "started_at" ),
+              "finished_at" => revision.fetch( "finished_at" ),
+              "summary" => revision.fetch( "summary" )
+            }
+          end
+        }
+      end
+      normalise_state!( state: state )
+      state
+    ensure
+      database&.close
+    end
+
+    def open_legacy_sqlite_database( path: )
+      database = SQLite3::Database.new( "file:#{path}?immutable=1", readonly: true, uri: true )
+      database.results_as_hash = true
+      database.busy_timeout = 5_000
+      database
+    rescue SQLite3::CantOpenException
+      database&.close
+      database = SQLite3::Database.new( path, readonly: true )
+      database.results_as_hash = true
+      database.busy_timeout = 5_000
+      database
+    end
+
+    def normalise_state!( state: )
+      deliveries = state[ "deliveries" ]
+      raise "state file must contain a JSON object at #{path}" unless deliveries.is_a?( Hash )
+      state[ "recovery_events" ] = Array( state[ "recovery_events" ] )
+
+      sequence_counts = Hash.new( 0 )
+      deliveries.each_value do |data|
+        data[ "revisions" ] = Array( data[ "revisions" ] )
+        sequence = integer_or_nil( value: data[ "sequence" ] )
+        sequence_counts[ sequence ] += 1 unless sequence.nil? || sequence <= 0
+      end
+
+      max_sequence = sequence_counts.keys.max.to_i
+      next_sequence = max_sequence + 1
+      deliveries.keys.sort_by { |key| [ deliveries.fetch( key ).fetch( "created_at", "" ).to_s, key ] }.each do |key|
+        data = deliveries.fetch( key )
+        sequence = integer_or_nil( value: data[ "sequence" ] )
+        if sequence.nil? || sequence <= 0 || sequence_counts[ sequence ] > 1
+          sequence = next_sequence
+          next_sequence += 1
+        end
+        data[ "sequence" ] = sequence
+      end
+
+      recorded_next = integer_or_nil( value: state[ "next_sequence" ] ) || 1
+      state[ "next_sequence" ] = [ recorded_next, next_sequence ].max
+    end
+
+    def next_delivery_sequence!( state: )
+      sequence = integer_or_nil( value: state[ "next_sequence" ] ) || 1
+      state[ "next_sequence" ] = sequence + 1
+      sequence
+    end
+
+    def integer_or_nil( value: )
+      Integer( value )
+    rescue ArgumentError, TypeError
+      nil
+    end
+
+    def delivery_sequence( data: )
+      integer_or_nil( value: data[ "sequence" ] ) || 0
+    end
+
+    def matching_deliveries( state:, repo_paths:, branch_name:, head: UNSET )
+      state[ "deliveries" ].select do |_key, data|
+        next false unless repo_paths.include?( data[ "repo_path" ] )
+        next false unless data[ "branch_name" ] == branch_name
+        next false unless head.equal?( UNSET ) || data[ "head" ] == head
+
+        true
+      end
+    end
+
+    def resolve_delivery_entry( state:, delivery: )
+      data = state[ "deliveries" ][ delivery.key ]
+      return [ delivery.key, data ] if data
+
+      repo_paths = repo_identity_paths( repo_path: delivery.repo_path )
+      match = matching_deliveries(
+        state: state,
+        repo_paths: repo_paths,
+        branch_name: delivery.branch,
+        head: delivery.head
+      ).max_by { |key, row| [ row[ "updated_at" ].to_s, delivery_sequence( data: row ), key ] }
+      raise "delivery not found: #{delivery.key}" unless match
+
+      match
+    end
+
+    def merged_revisions( entries: )
+      entries
+        .flat_map { |_key, data| Array( data[ "revisions" ] ) }
+        .map do |revision|
+          {
+            "number" => revision.fetch( "number", 0 ).to_i,
+            "cause" => revision[ "cause" ],
+            "provider" => revision[ "provider" ],
+            "status" => revision[ "status" ],
+            "started_at" => revision[ "started_at" ],
+            "finished_at" => revision[ "finished_at" ],
+            "summary" => revision[ "summary" ]
+          }
+        end
+        .uniq do |revision|
+          [
+            revision[ "cause" ],
+            revision[ "provider" ],
+            revision[ "status" ],
+            revision[ "started_at" ],
+            revision[ "finished_at" ],
+            revision[ "summary" ]
+          ]
+        end
+        .sort_by { |revision| [ revision.fetch( "started_at", "" ).to_s, revision.fetch( "number", 0 ).to_i ] }
+        .each_with_index
+        .map do |revision, index|
+          revision.merge( "number" => index + 1 )
+        end
+    end
+
     def supersede_branch!( state:, repo_path:, branch_name:, timestamp: )
       repo_paths = repo_identity_paths( repo_path: repo_path )
       state[ "deliveries" ].each do |_key, data|
@@ -294,7 +512,7 @@ module Carson
     end
 
     def now_utc
-      Time.now.utc.iso8601
+      Time.now.utc.iso8601( 6 )
     end
 
     def record_recovery_event( repository:, branch_name:, pr_number:, pr_url:, check_name:, default_branch:, default_branch_sha:, pr_sha:, actor:, merge_method:, status:, summary: )
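The migration trigger in the diff above decides whether a ledger file is a legacy SQLite database by sniffing the 16-byte magic header, not by trusting the file extension. A standalone sketch of that check, with temp files standing in for real ledgers:

```ruby
require "tempfile"

# Every SQLite database begins with this exact 16-byte header.
SQLITE_HEADER = "SQLite format 3\0".b.freeze

# True when the file starts with the magic header, mirroring the
# sqlite_database_file? predicate above; unreadable files count as "no".
def sqlite_database_file?( path )
  return false unless File.file?( path )
  File.binread( path, SQLITE_HEADER.bytesize ) == SQLITE_HEADER
rescue StandardError
  false
end

json_ledger = Tempfile.new( [ "ledger", ".json" ] )
json_ledger.write( "{\"deliveries\":{}}" )
json_ledger.flush

# A fake database: only the header matters for detection.
fake_sqlite = Tempfile.new( [ "ledger", ".sqlite3" ] )
fake_sqlite.binmode
fake_sqlite.write( SQLITE_HEADER + "page data".b )
fake_sqlite.flush

detected = sqlite_database_file?( fake_sqlite.path ) && !sqlite_database_file?( json_ledger.path )
puts detected
```

Header sniffing makes the migration robust to the case the ledger handles explicitly: a `state_path` ending in `.json` whose on-disk content is still an old SQLite database.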
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: carson
 version: !ruby/object:Gem::Version
-  version: 3.27.0
+  version: 3.27.1
 platform: ruby
 authors:
 - Hailei Wang
@@ -10,7 +10,27 @@ authors:
 bindir: exe
 cert_chain: []
 date: 1980-01-02 00:00:00.000000000 Z
-dependencies: []
+dependencies:
+- !ruby/object:Gem::Dependency
+  name: sqlite3
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '1.3'
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: '3'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '1.3'
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: '3'
 description: 'Carson is an autonomous git strategist and repositories governor that
   lives outside the repositories it governs — no Carson-owned artefacts in your repo.
   As strategist, Carson knows when to branch, how to isolate concurrent work, and