data_porter 0.1.0 → 0.2.0

This diff shows the changes between publicly released versions of the package as they appear in their respective public registries, and is provided for informational purposes only.
Files changed (113)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +13 -1
  3. data/README.md +15 -5
  4. data/ROADMAP.md +71 -0
  5. data/app/models/data_porter/data_import.rb +1 -1
  6. data/app/views/data_porter/imports/index.html.erb +1 -1
  7. data/app/views/data_porter/imports/new.html.erb +1 -1
  8. data/lib/data_porter/configuration.rb +1 -1
  9. data/lib/data_porter/sources/xlsx.rb +68 -0
  10. data/lib/data_porter/sources.rb +3 -1
  11. data/lib/data_porter/version.rb +1 -1
  12. metadata +19 -104
  13. data/.claude/commands/blog-status.md +0 -10
  14. data/.claude/commands/blog.md +0 -109
  15. data/.claude/commands/task-done.md +0 -27
  16. data/.claude/commands/tm/add-dependency.md +0 -58
  17. data/.claude/commands/tm/add-subtask.md +0 -79
  18. data/.claude/commands/tm/add-task.md +0 -81
  19. data/.claude/commands/tm/analyze-complexity.md +0 -124
  20. data/.claude/commands/tm/analyze-project.md +0 -100
  21. data/.claude/commands/tm/auto-implement-tasks.md +0 -100
  22. data/.claude/commands/tm/command-pipeline.md +0 -80
  23. data/.claude/commands/tm/complexity-report.md +0 -120
  24. data/.claude/commands/tm/convert-task-to-subtask.md +0 -74
  25. data/.claude/commands/tm/expand-all-tasks.md +0 -52
  26. data/.claude/commands/tm/expand-task.md +0 -52
  27. data/.claude/commands/tm/fix-dependencies.md +0 -82
  28. data/.claude/commands/tm/help.md +0 -101
  29. data/.claude/commands/tm/init-project-quick.md +0 -49
  30. data/.claude/commands/tm/init-project.md +0 -53
  31. data/.claude/commands/tm/install-taskmaster.md +0 -118
  32. data/.claude/commands/tm/learn.md +0 -106
  33. data/.claude/commands/tm/list-tasks-by-status.md +0 -42
  34. data/.claude/commands/tm/list-tasks-with-subtasks.md +0 -30
  35. data/.claude/commands/tm/list-tasks.md +0 -46
  36. data/.claude/commands/tm/next-task.md +0 -69
  37. data/.claude/commands/tm/parse-prd-with-research.md +0 -51
  38. data/.claude/commands/tm/parse-prd.md +0 -52
  39. data/.claude/commands/tm/project-status.md +0 -67
  40. data/.claude/commands/tm/quick-install-taskmaster.md +0 -23
  41. data/.claude/commands/tm/remove-all-subtasks.md +0 -94
  42. data/.claude/commands/tm/remove-dependency.md +0 -65
  43. data/.claude/commands/tm/remove-subtask.md +0 -87
  44. data/.claude/commands/tm/remove-subtasks.md +0 -89
  45. data/.claude/commands/tm/remove-task.md +0 -110
  46. data/.claude/commands/tm/setup-models.md +0 -52
  47. data/.claude/commands/tm/show-task.md +0 -85
  48. data/.claude/commands/tm/smart-workflow.md +0 -58
  49. data/.claude/commands/tm/sync-readme.md +0 -120
  50. data/.claude/commands/tm/tm-main.md +0 -147
  51. data/.claude/commands/tm/to-cancelled.md +0 -58
  52. data/.claude/commands/tm/to-deferred.md +0 -50
  53. data/.claude/commands/tm/to-done.md +0 -47
  54. data/.claude/commands/tm/to-in-progress.md +0 -39
  55. data/.claude/commands/tm/to-pending.md +0 -35
  56. data/.claude/commands/tm/to-review.md +0 -43
  57. data/.claude/commands/tm/update-single-task.md +0 -122
  58. data/.claude/commands/tm/update-task.md +0 -75
  59. data/.claude/commands/tm/update-tasks-from-id.md +0 -111
  60. data/.claude/commands/tm/validate-dependencies.md +0 -72
  61. data/.claude/commands/tm/view-models.md +0 -52
  62. data/.env.example +0 -12
  63. data/.mcp.json +0 -24
  64. data/.taskmaster/CLAUDE.md +0 -435
  65. data/.taskmaster/config.json +0 -44
  66. data/.taskmaster/docs/prd.txt +0 -2044
  67. data/.taskmaster/state.json +0 -6
  68. data/.taskmaster/tasks/task_001.md +0 -19
  69. data/.taskmaster/tasks/task_002.md +0 -19
  70. data/.taskmaster/tasks/task_003.md +0 -19
  71. data/.taskmaster/tasks/task_004.md +0 -19
  72. data/.taskmaster/tasks/task_005.md +0 -19
  73. data/.taskmaster/tasks/task_006.md +0 -19
  74. data/.taskmaster/tasks/task_007.md +0 -19
  75. data/.taskmaster/tasks/task_008.md +0 -19
  76. data/.taskmaster/tasks/task_009.md +0 -19
  77. data/.taskmaster/tasks/task_010.md +0 -19
  78. data/.taskmaster/tasks/task_011.md +0 -19
  79. data/.taskmaster/tasks/task_012.md +0 -19
  80. data/.taskmaster/tasks/task_013.md +0 -19
  81. data/.taskmaster/tasks/task_014.md +0 -19
  82. data/.taskmaster/tasks/task_015.md +0 -19
  83. data/.taskmaster/tasks/task_016.md +0 -19
  84. data/.taskmaster/tasks/task_017.md +0 -19
  85. data/.taskmaster/tasks/task_018.md +0 -19
  86. data/.taskmaster/tasks/task_019.md +0 -19
  87. data/.taskmaster/tasks/task_020.md +0 -19
  88. data/.taskmaster/tasks/tasks.json +0 -299
  89. data/.taskmaster/templates/example_prd.txt +0 -47
  90. data/.taskmaster/templates/example_prd_rpg.txt +0 -511
  91. data/CLAUDE.md +0 -65
  92. data/config/database.yml +0 -3
  93. data/docs/SPEC.md +0 -2012
  94. data/docs/UI.md +0 -32
  95. data/docs/blog/001-why-build-a-data-import-engine.md +0 -166
  96. data/docs/blog/002-scaffolding-a-rails-engine.md +0 -188
  97. data/docs/blog/003-configuration-dsl.md +0 -222
  98. data/docs/blog/004-store-model-jsonb.md +0 -237
  99. data/docs/blog/005-target-dsl.md +0 -284
  100. data/docs/blog/006-parsing-csv-sources.md +0 -300
  101. data/docs/blog/007-orchestrator.md +0 -247
  102. data/docs/blog/008-actioncable-stimulus.md +0 -376
  103. data/docs/blog/009-phlex-ui-components.md +0 -446
  104. data/docs/blog/010-controllers-routing.md +0 -374
  105. data/docs/blog/011-generators.md +0 -364
  106. data/docs/blog/012-json-api-sources.md +0 -323
  107. data/docs/blog/013-testing-rails-engine.md +0 -618
  108. data/docs/blog/014-dry-run.md +0 -307
  109. data/docs/blog/015-publishing-retro.md +0 -264
  110. data/docs/blog/016-erb-view-templates.md +0 -431
  111. data/docs/blog/017-showcase-final-retro.md +0 -220
  112. data/docs/blog/BACKLOG.md +0 -8
  113. data/docs/blog/SERIES.md +0 -154
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: cfa4a171b2ca5f54f539d74f14abd600159aa8563924d8c83c9b7d4a00f5e1ff
- data.tar.gz: 3f0f50d55471aa7fc3b888a8b2472d29c93af7559702f6f3c84fb180aa40f5aa
+ metadata.gz: e95ffce9eec8e4473e77c4da1876da232dd95ade124bc204b867c7862d14938f
+ data.tar.gz: d4ef943174d8374bd272331b94014723d37bf2c444c0edc2a5faad132ba9859f
  SHA512:
- metadata.gz: 2624e1366152820ae9070e607f2ad88cb6ce4e65c4fedaeb193ec116c5155879a6ce7be5f55a2d5f7b20a8a96308cb0f5d7e57535bbabe4a726ed62e3374abac
- data.tar.gz: 2b03913652f0a6a47a62396174c5be712c3825b570b52b8c58baa715632d99b16d4e3d1bb8df50289f81ad15ce2389640ff87542ece3a2eeedf980273f6f4f69
+ metadata.gz: a3d9ca57d376046a35c8024ab1f1ed15416544737ff44d34b94d6ed1dd36c9c50693a93b80357bff90179e26c1467dbd01f3be872bcf241b8f3fd6eaeabaa49c
+ data.tar.gz: fe570eec3633168b17762849e2387aa7cd56a01bbf62668e7e823b84f333b64d980a7aa55e95732d7ee0c2b7c9f4e36549db10e3aa36b59720de2c7348c45ceb
data/CHANGELOG.md CHANGED
@@ -5,7 +5,19 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

- ## [Unreleased]
+ ## [0.2.0] - 2026-02-07
+
+ ### Added
+
+ - **XLSX source** -- Import Excel `.xlsx` files via `Sources::Xlsx`, powered by [creek](https://github.com/pythonicrubyist/creek) for streaming, memory-efficient parsing
+ - Sheet selection via `config["sheet_index"]` (defaults to first sheet)
+ - `creek` runtime dependency in gemspec
+
+ ### Changed
+
+ - Default `enabled_sources` now includes `:xlsx` (`%i[csv json api xlsx]`)
+ - Dropzone hints updated to mention XLSX in both index and new import views
+ - 225 RSpec examples (up from 221), 0 failures

  ## [0.1.0] - 2026-02-06

data/README.md CHANGED
@@ -2,7 +2,7 @@

  A mountable Rails engine for 3-step data import workflows: **Upload**, **Preview**, **Import**.

- Supports CSV, JSON, and API sources with a declarative DSL for defining import targets. Business-agnostic by design -- all domain logic lives in your host app.
+ Supports CSV, JSON, XLSX, and API sources with a declarative DSL for defining import targets. Business-agnostic by design -- all domain logic lives in your host app.

  ![Import list with status badges](docs/screenshots/index-with-previewing.jpg)

@@ -105,7 +105,7 @@ DataPorter.configure do |config|
  config.preview_limit = 500

  # Enabled source types.
- config.enabled_sources = %i[csv json api]
+ config.enabled_sources = %i[csv json api xlsx]
  end
  ```

@@ -117,7 +117,7 @@ end
  | `cable_channel_prefix` | `"data_porter"` | ActionCable stream prefix |
  | `context_builder` | `nil` | Lambda receiving the controller, returns context passed to target methods |
  | `preview_limit` | `500` | Max records shown in the preview step |
- | `enabled_sources` | `%i[csv json api]` | Source types available in the UI |
+ | `enabled_sources` | `%i[csv json api xlsx]` | Source types available in the UI |

  ## Defining Targets

@@ -130,7 +130,7 @@ class OrderTarget < DataPorter::Target
  label "Orders"
  model_name "Order"
  icon "fas fa-shopping-cart"
- sources :csv, :json, :api
+ sources :csv, :json, :api, :xlsx

  columns do
  column :order_number, type: :string, required: true
@@ -175,7 +175,7 @@ CSS icon class (e.g. FontAwesome) shown in the UI.

  #### `sources(*types)`

- Accepted source types: `:csv`, `:json`, `:api`.
+ Accepted source types: `:csv`, `:json`, `:api`, `:xlsx`.

  #### `columns { ... }`

@@ -295,6 +295,16 @@ end

  Upload a CSV file. Configure header mappings with `csv_mapping` when headers don't match your column names.

+ ### XLSX
+
+ Upload an Excel `.xlsx` file. Uses the same `csv_mapping` for header-to-column mapping. By default the first sheet is parsed; select a different sheet via config:
+
+ ```ruby
+ import.config = { "sheet_index" => 1 }
+ ```
+
+ Powered by [creek](https://github.com/pythonicrubyist/creek) for streaming, memory-efficient parsing.
+
  ### JSON

  Upload a JSON file. Use `json_root` to specify the path to the records array. Raw JSON arrays are supported without `json_root`.
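The `sheet_index` option described above is read defensively by `lib/data_porter/sources/xlsx.rb` (shown later in this diff): non-Hash configs fall back to the first sheet, and string values are coerced to integers. A standalone Ruby sketch of that lookup, runnable without the gem:

```ruby
# Standalone sketch of the "sheet_index" lookup used by Sources::Xlsx.
# A nil or malformed config falls back to sheet 0; string values
# (e.g. from form params) are coerced with to_i.
def sheet_index(config)
  return 0 unless config.is_a?(Hash)

  config.fetch("sheet_index", 0).to_i
end

puts sheet_index({ "sheet_index" => "2" }) # => 2
puts sheet_index(nil)                      # => 0
```

This is why both `{ "sheet_index" => 1 }` and `{ "sheet_index" => "1" }` select the second sheet.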
data/ROADMAP.md ADDED
@@ -0,0 +1,71 @@
+ # Roadmap
+
+ ## v2.0 - Planned
+
+ ### High Priority
+
+ #### XLSX Source
+ - Parse `.xlsx` files natively (via `creek` or `roo` gem)
+ - Sheet selector when the file contains multiple sheets
+ - Same parsing pipeline as CSV (prerequisite for column mapping)
+
+ #### Interactive Column Mapping & Templates
+ - Mapping UI on the preview step: each CSV/XLSX column header gets a dropdown to select the target field
+ - Auto-suggest based on column name similarity
+ - Save mapping as a reusable template (name + column-to-field pairs)
+ - Template selector that pre-fills all dropdowns at once
+ - Stored per-target so each import type has its own template library
+
+ #### Export (reverse workflow)
+ - `ExportTarget` DSL mirroring the import Target
+ - Define query scope, columns, and output format (CSV, JSON, XLSX)
+ - Background job with progress bar (reuse existing ActionCable infrastructure)
+ - Download link on completion
+
+ #### Batch Import
+ - Process large files in configurable batches (default: 1,000 records)
+ - Use `insert_all` / `upsert_all` for bulk persistence
+ - Granular progress: "12,000 / 150,000 records"
+ - Memory-efficient streaming parser for CSV and XLSX
+
+ #### Scheduled Imports
+ - Cron-like configuration in the Target DSL: `schedule "0 3 * * *"`
+ - Recurring API source imports (fetch external data on a timer)
+ - Dashboard for scheduled imports with last run status and next run time
+ - Built on ActiveJob + `solid_queue` or host app's queue adapter
+
+ ### Medium Priority
+
+ #### Column Transformers
+ - Inline transform lambdas in the column DSL
+ - Built-in transformers: `downcase`, `strip`, `normalize_phone`, `parse_date`
+ ```ruby
+ column :email, type: :email, transform: ->(v) { v.downcase.strip }
+ ```
+
+ #### Diff Mode
+ - Compare incoming records with existing database data
+ - Show what will be created, updated, or left unchanged
+ - Visual diff on the preview step before confirming
+ - Supports `deduplicate_by` keys for record matching
+
+ #### Webhooks
+ - Notify an external URL on import completion or failure
+ - Configurable per-target or globally
+ - JSON payload with import summary and error details
+
+ #### Import API (REST)
+ - `POST /data_porter/api/imports` to trigger imports programmatically
+ - Accept file upload or source URL
+ - JSON response with import ID for status polling
+
+ ### Low Priority
+
+ #### Dashboard Analytics
+ - Stats: imports per week, error rate, average duration, top targets
+ - Lightweight charts (inline SVG, no JS dependency)
+
+ #### Rollback
+ - Undo a completed import (soft-delete created records)
+ - Uses `target_id` already tracked on each ImportRecord
+ - Confirmation step with summary of records to be reverted
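The roadmap's Batch Import item can be sketched in plain Ruby. Everything here is hypothetical (the feature is only planned): `BATCH_SIZE`, the record shape, and the progress strings are illustrative, and the commented-out `insert_all` call marks where bulk persistence would go in a real Rails app.

```ruby
# Hypothetical sketch of batched processing with granular progress,
# as proposed in the Batch Import roadmap item above.
BATCH_SIZE = 1_000
records = Array.new(2_500) { |i| { order_number: "ORD-#{i}" } }

progress = []
records.each_slice(BATCH_SIZE).with_index(1) do |batch, n|
  # Order.insert_all(batch)  # bulk persistence in a real Rails app
  done = [n * BATCH_SIZE, records.size].min
  progress << "#{done} / #{records.size} records"
end

puts progress.last # => "2500 / 2500 records"
```

`each_slice` keeps memory bounded to one batch at a time, which pairs naturally with a streaming parser upstream.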
data/app/models/data_porter/data_import.rb CHANGED
@@ -24,7 +24,7 @@ module DataPorter
  attribute :config, :json, default: -> { {} }

  validates :target_key, presence: true
- validates :source_type, presence: true, inclusion: { in: %w[csv json api] }
+ validates :source_type, presence: true, inclusion: { in: %w[csv json api xlsx] }

  def target_class
  Registry.find(target_key)
data/app/views/data_porter/imports/index.html.erb CHANGED
@@ -77,7 +77,7 @@
  <div class="dp-dropzone__content">
  <div class="dp-dropzone__icon">&#128196;</div>
  <span class="dp-dropzone__text">Drop your file here or <strong>browse</strong></span>
- <span class="dp-dropzone__hint">CSV or JSON files accepted</span>
+ <span class="dp-dropzone__hint">CSV, JSON, or XLSX files accepted</span>
  </div>
  <div class="dp-dropzone__selected" id="dp-file-name" style="display: none;"></div>
  </label>
data/app/views/data_porter/imports/new.html.erb CHANGED
@@ -30,7 +30,7 @@
  <div class="dp-dropzone__content">
  <div class="dp-dropzone__icon">&#128196;</div>
  <span class="dp-dropzone__text">Drop your file here or <strong>browse</strong></span>
- <span class="dp-dropzone__hint">CSV or JSON files accepted</span>
+ <span class="dp-dropzone__hint">CSV, JSON, or XLSX files accepted</span>
  </div>
  <div class="dp-dropzone__selected" id="dp-file-name-new" style="display: none;"></div>
  </label>
data/lib/data_porter/configuration.rb CHANGED
@@ -18,7 +18,7 @@ module DataPorter
  @cable_channel_prefix = "data_porter"
  @context_builder = nil
  @preview_limit = 500
- @enabled_sources = %i[csv json api]
+ @enabled_sources = %i[csv json api xlsx]
  @scope = nil
  end
  end
data/lib/data_porter/sources/xlsx.rb ADDED
@@ -0,0 +1,68 @@
+ # frozen_string_literal: true
+
+ require "creek"
+ require "tempfile"
+
+ module DataPorter
+   module Sources
+     class Xlsx < Base
+       def initialize(data_import, file_path: nil)
+         super(data_import)
+         @file_path = file_path
+       end
+
+       def fetch
+         rows = parse_sheet(target_sheet)
+         rows.map { |row| apply_csv_mapping(row) }
+       ensure
+         cleanup
+       end
+
+       private
+
+       def target_sheet
+         creek = Creek::Book.new(xlsx_path)
+         creek.sheets[sheet_index]
+       end
+
+       def parse_sheet(sheet)
+         rows = sheet.simple_rows.to_a
+         return [] if rows.size <= 1
+
+         headers = rows.first.values.map(&:to_s)
+         rows.drop(1).map { |row| build_row(headers, row) }
+       end
+
+       def build_row(headers, row)
+         values = row.values.map { |v| v&.to_s }
+         headers.zip(values).to_h
+       end
+
+       def xlsx_path
+         @file_path || download_to_tempfile
+       end
+
+       def download_to_tempfile
+         @tempfile = Tempfile.new(["data_porter", ".xlsx"])
+         @tempfile.binmode
+         @tempfile.write(@data_import.file.download)
+         @tempfile.rewind
+         @tempfile.path
+       end
+
+       def sheet_index
+         config = @data_import.config
+         return 0 unless config.is_a?(Hash)
+
+         config.fetch("sheet_index", 0).to_i
+       end
+
+       def cleanup
+         return unless @tempfile
+
+         @tempfile.close
+         @tempfile.unlink
+       end
+     end
+   end
+ end
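The `parse_sheet`/`build_row` pair in the new source boils down to zipping the header row against each data row. A minimal sketch without creek (the hashes below stand in for `simple_rows` output, and the cell references are illustrative):

```ruby
# Standalone sketch of the header-zipping done by Sources::Xlsx:
# the first row supplies the keys, each remaining row becomes a Hash,
# and missing trailing cells come through as nil.
rows = [
  { "A1" => "name", "B1" => "email" },
  { "A2" => "Ada",  "B2" => "ada@example.com" },
  { "A3" => "Grace" } # short row: no email cell
]

headers = rows.first.values.map(&:to_s)
records = rows.drop(1).map do |row|
  headers.zip(row.values.map { |v| v&.to_s }).to_h
end
# records => [{"name"=>"Ada", "email"=>"ada@example.com"},
#             {"name"=>"Grace", "email"=>nil}]
```

`zip` pads a short row with `nil`, which is why every record still carries every header as a key.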
data/lib/data_porter/sources.rb CHANGED
@@ -4,13 +4,15 @@ require_relative "sources/base"
  require_relative "sources/csv"
  require_relative "sources/json"
  require_relative "sources/api"
+ require_relative "sources/xlsx"

  module DataPorter
  module Sources
  REGISTRY = {
  api: Api,
  csv: Csv,
- json: Json
+ json: Json,
+ xlsx: Xlsx
  }.freeze

  def self.resolve(type)
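The registry above is a plain frozen Hash keyed by symbol, so adding XLSX support is a one-line registry change plus a `require_relative`. A standalone sketch of the lookup pattern (the classes and the error branch here are stand-ins, not the gem's actual `resolve` implementation):

```ruby
# Sketch of the frozen-Hash registry pattern used by DataPorter::Sources.
# CsvSource/XlsxSource are placeholder classes for illustration.
CsvSource  = Class.new
XlsxSource = Class.new

REGISTRY = { csv: CsvSource, xlsx: XlsxSource }.freeze

def resolve(type)
  # fetch with a block gives a clear error for unregistered types
  REGISTRY.fetch(type.to_sym) { raise ArgumentError, "unknown source: #{type}" }
end
```

`to_sym` lets callers pass either `"xlsx"` or `:xlsx`, and freezing the Hash keeps the registry immutable after load.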
data/lib/data_porter/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module DataPorter
- VERSION = "0.1.0"
+ VERSION = "0.2.0"
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: data_porter
  version: !ruby/object:Gem::Version
- version: 0.1.0
+ version: 0.2.0
  platform: ruby
  authors:
  - Seryl Lounis
@@ -9,6 +9,20 @@ bindir: exe
  cert_chain: []
  date: 1980-01-02 00:00:00.000000000 Z
  dependencies:
+ - !ruby/object:Gem::Dependency
+ name: creek
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ type: :runtime
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
  - !ruby/object:Gem::Dependency
  name: csv
  requirement: !ruby/object:Gem::Requirement
@@ -80,98 +94,20 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '1.0'
  description: 'A mountable Rails engine providing a complete data import workflow:
- upload/configure, preview with validation, and import. Supports CSV, JSON, and API
- sources with a simple DSL for defining import targets.'
+ upload/configure, preview with validation, and import. Supports CSV, JSON, XLSX,
+ and API sources with a simple DSL for defining import targets.'
  email:
  - seryllounis@outlook.fr
  executables: []
  extensions: []
  extra_rdoc_files: []
  files:
- - ".claude/commands/blog-status.md"
- - ".claude/commands/blog.md"
- - ".claude/commands/task-done.md"
- - ".claude/commands/tm/add-dependency.md"
- - ".claude/commands/tm/add-subtask.md"
- - ".claude/commands/tm/add-task.md"
- - ".claude/commands/tm/analyze-complexity.md"
- - ".claude/commands/tm/analyze-project.md"
- - ".claude/commands/tm/auto-implement-tasks.md"
- - ".claude/commands/tm/command-pipeline.md"
- - ".claude/commands/tm/complexity-report.md"
- - ".claude/commands/tm/convert-task-to-subtask.md"
- - ".claude/commands/tm/expand-all-tasks.md"
- - ".claude/commands/tm/expand-task.md"
- - ".claude/commands/tm/fix-dependencies.md"
- - ".claude/commands/tm/help.md"
- - ".claude/commands/tm/init-project-quick.md"
- - ".claude/commands/tm/init-project.md"
- - ".claude/commands/tm/install-taskmaster.md"
- - ".claude/commands/tm/learn.md"
- - ".claude/commands/tm/list-tasks-by-status.md"
- - ".claude/commands/tm/list-tasks-with-subtasks.md"
- - ".claude/commands/tm/list-tasks.md"
- - ".claude/commands/tm/next-task.md"
- - ".claude/commands/tm/parse-prd-with-research.md"
- - ".claude/commands/tm/parse-prd.md"
- - ".claude/commands/tm/project-status.md"
- - ".claude/commands/tm/quick-install-taskmaster.md"
- - ".claude/commands/tm/remove-all-subtasks.md"
- - ".claude/commands/tm/remove-dependency.md"
- - ".claude/commands/tm/remove-subtask.md"
- - ".claude/commands/tm/remove-subtasks.md"
- - ".claude/commands/tm/remove-task.md"
- - ".claude/commands/tm/setup-models.md"
- - ".claude/commands/tm/show-task.md"
- - ".claude/commands/tm/smart-workflow.md"
- - ".claude/commands/tm/sync-readme.md"
- - ".claude/commands/tm/tm-main.md"
- - ".claude/commands/tm/to-cancelled.md"
- - ".claude/commands/tm/to-deferred.md"
- - ".claude/commands/tm/to-done.md"
- - ".claude/commands/tm/to-in-progress.md"
- - ".claude/commands/tm/to-pending.md"
- - ".claude/commands/tm/to-review.md"
- - ".claude/commands/tm/update-single-task.md"
- - ".claude/commands/tm/update-task.md"
- - ".claude/commands/tm/update-tasks-from-id.md"
- - ".claude/commands/tm/validate-dependencies.md"
- - ".claude/commands/tm/view-models.md"
- - ".env.example"
- - ".mcp.json"
- - ".taskmaster/CLAUDE.md"
- - ".taskmaster/config.json"
- - ".taskmaster/docs/prd.txt"
- - ".taskmaster/state.json"
- - ".taskmaster/tasks/task_001.md"
- - ".taskmaster/tasks/task_002.md"
- - ".taskmaster/tasks/task_003.md"
- - ".taskmaster/tasks/task_004.md"
- - ".taskmaster/tasks/task_005.md"
- - ".taskmaster/tasks/task_006.md"
- - ".taskmaster/tasks/task_007.md"
- - ".taskmaster/tasks/task_008.md"
- - ".taskmaster/tasks/task_009.md"
- - ".taskmaster/tasks/task_010.md"
- - ".taskmaster/tasks/task_011.md"
- - ".taskmaster/tasks/task_012.md"
- - ".taskmaster/tasks/task_013.md"
- - ".taskmaster/tasks/task_014.md"
- - ".taskmaster/tasks/task_015.md"
- - ".taskmaster/tasks/task_016.md"
- - ".taskmaster/tasks/task_017.md"
- - ".taskmaster/tasks/task_018.md"
- - ".taskmaster/tasks/task_019.md"
- - ".taskmaster/tasks/task_020.md"
- - ".taskmaster/tasks/tasks.json"
- - ".taskmaster/templates/example_prd.txt"
- - ".taskmaster/templates/example_prd_rpg.txt"
  - CHANGELOG.md
- - CLAUDE.md
  - CODE_OF_CONDUCT.md
  - CONTRIBUTING.md
  - LICENSE
  - README.md
+ - ROADMAP.md
  - Rakefile
  - app/assets/stylesheets/data_porter/application.css
  - app/channels/data_porter/import_channel.rb
@@ -184,29 +120,7 @@ files:
  - app/views/data_porter/imports/index.html.erb
  - app/views/data_porter/imports/new.html.erb
  - app/views/data_porter/imports/show.html.erb
- - config/database.yml
  - config/routes.rb
- - docs/SPEC.md
- - docs/UI.md
- - docs/blog/001-why-build-a-data-import-engine.md
- - docs/blog/002-scaffolding-a-rails-engine.md
- - docs/blog/003-configuration-dsl.md
- - docs/blog/004-store-model-jsonb.md
- - docs/blog/005-target-dsl.md
- - docs/blog/006-parsing-csv-sources.md
- - docs/blog/007-orchestrator.md
- - docs/blog/008-actioncable-stimulus.md
- - docs/blog/009-phlex-ui-components.md
- - docs/blog/010-controllers-routing.md
- - docs/blog/011-generators.md
- - docs/blog/012-json-api-sources.md
- - docs/blog/013-testing-rails-engine.md
- - docs/blog/014-dry-run.md
- - docs/blog/015-publishing-retro.md
- - docs/blog/016-erb-view-templates.md
- - docs/blog/017-showcase-final-retro.md
- - docs/blog/BACKLOG.md
- - docs/blog/SERIES.md
  - docs/screenshots/index-with-previewing.jpg
  - docs/screenshots/index.jpg
  - docs/screenshots/modal-new-import.jpg
@@ -233,6 +147,7 @@ files:
  - lib/data_porter/sources/base.rb
  - lib/data_porter/sources/csv.rb
  - lib/data_porter/sources/json.rb
+ - lib/data_porter/sources/xlsx.rb
  - lib/data_porter/store_models/error.rb
  - lib/data_porter/store_models/import_record.rb
  - lib/data_porter/store_models/report.rb
data/.claude/commands/blog-status.md DELETED
@@ -1,10 +0,0 @@
- Show the status of the DataPorter blog article series.
-
- 1. List all existing articles in `docs/blog/` with their title, part number, and published status
- 2. Check `task-master list` for completed tasks since the last article
- 3. Suggest 2-3 potential next article topics based on:
- - Recently completed tasks not yet covered
- - The natural progression of the series
- - Topics that would be interesting for the Ruby/Rails community
-
- Format the output as a clear dashboard.
data/.claude/commands/blog.md DELETED
@@ -1,109 +0,0 @@
- Write the next blog article for the DataPorter series.
-
- Topic/title hint: $ARGUMENTS
-
- ## Process
-
- 1. **Read the series plan** at `docs/blog/SERIES.md` to identify which part this is and what it should cover.
-
- 2. **Gather context:**
- - `task-master list` to see completed tasks
- - `git log --oneline -20` for recent commits
- - Read the source files listed in the series plan for this part
- - Read any previous articles in `docs/blog/` to maintain continuity
-
- 3. **Write the article** in `docs/blog/NNN-slug.md` (NNN = part number, zero-padded).
-
- 4. **Mandatory article structure:**
-
- ```markdown
- ---
- title: "Building DataPorter #N — <Title>"
- series: "Building DataPorter - A Data Import Engine for Rails"
- part: N
- tags: [ruby, rails, rails-engine, gem-development, <2-3 topic tags>]
- published: false
- ---
-
- # <Title>
-
- > One-line summary of what the reader will learn.
-
- ## Context
-
- Where we are in the series (1-2 sentences). Link to previous article.
- What we'll build in this article (clear scope).
-
- ## The problem
-
- Why do we need this? Real-world scenario. Keep it short (3-5 sentences).
-
- ## What we're building
-
- Show the end result first: a code snippet, a diagram, or a usage example.
- The reader should immediately understand where we're going.
-
- ## Implementation
-
- ### Step 1 — <Name>
-
- Explain the WHY, then show the code.
-
- \`\`\`ruby
- # path/to/file.rb
- <focused snippet, not full file>
- \`\`\`
-
- Brief explanation of what this does and why we chose this approach.
-
- ### Step 2 — <Name>
-
- (repeat pattern: WHY -> code -> explain)
-
- ### Step 3 — <Name>
-
- (repeat)
-
- ## Decisions & tradeoffs
-
- | Decision | We chose | Over | Because |
- |----------|----------|------|---------|
- | ... | ... | ... | ... |
-
- ## Testing it
-
- Show how to verify this works (spec snippet or console output).
-
- \`\`\`ruby
- # spec/...
- \`\`\`
-
- ## Recap
-
- 3-4 bullet points: what we built, what we learned.
-
- ## Next up
-
- One paragraph teasing the next article in the series. End with a hook.
-
- ---
-
- *This is part N of the series "Building DataPorter". [Previous: ...](#) | [Next: ...](#)*
- *Code: [GitHub repo link]*
- ```
-
- 5. **Writing rules:**
- - 5-8 minute read (~1000-1500 words)
- - Max 3-4 implementation steps per article
- - Code snippets: focused, 5-25 lines each (never full file dumps)
- - Every snippet has a file path comment on line 1
- - Explain decisions as "We chose X over Y because Z"
- - Use the actual code from the codebase (not made-up examples)
- - Conversational but technical tone: "Let's...", "Here's why..."
- - English only
- - No emojis in prose (OK in front matter tags)
-
- 6. **After writing:**
- - Update the article status in `docs/blog/SERIES.md` to `draft`
- - Show: title, word count, reading time, and the decisions table
- - Suggest 2-3 improvements or missing points
data/.claude/commands/task-done.md DELETED
@@ -1,27 +0,0 @@
- Mark a Taskmaster task as done and check if a blog article should be generated.
-
- Task ID: $ARGUMENTS
-
- ## Process
-
- 1. **Complete the task:**
- ```
- task-master set-status --id=$ARGUMENTS --status=done
- ```
-
- 2. **Check blog series:** Read `docs/blog/SERIES.md` and find which blog part includes this task.
-
- 3. **Check if all tasks for that part are done:**
- - Run `task-master show <id>` for each task in the blog part
- - If ALL tasks for the part have status `done`, proceed to step 4
- - If some tasks are still pending/in-progress, report progress ("Part N: 2/3 tasks done")
-
- 4. **If part is ready, generate the article:**
- - Follow the `/blog` command process
- - Write the draft in `docs/blog/NNN-slug.md`
- - Update SERIES.md status to `draft`
-
- 5. **Show summary:**
- - Task completed
- - Blog part progress (e.g., "Part 5: 2/2 tasks done — article draft generated")
- - Next task suggestion via `task-master next`
data/.claude/commands/tm/add-dependency.md DELETED
@@ -1,58 +0,0 @@
- Add Dependency
-
- Arguments: $ARGUMENTS
- Add a dependency between tasks.
-
- Arguments: $ARGUMENTS
-
- Parse the task IDs to establish dependency relationship.
-
- ## Adding Dependencies
-
- Creates a dependency where one task must be completed before another can start.
-
- ## Argument Parsing
-
- Parse natural language or IDs:
- - "make 5 depend on 3" → task 5 depends on task 3
- - "5 needs 3" → task 5 depends on task 3
- - "5 3" → task 5 depends on task 3
- - "5 after 3" → task 5 depends on task 3
-
- ## Execution
-
- ```bash
- task-master add-dependency --id=<task-id> --depends-on=<dependency-id>
- ```
-
- ## Validation
-
- Before adding:
- 1. **Verify both tasks exist**
- 2. **Check for circular dependencies**
- 3. **Ensure dependency makes logical sense**
- 4. **Warn if creating complex chains**
-
- ## Smart Features
-
- - Detect if dependency already exists
- - Suggest related dependencies
- - Show impact on task flow
- - Update task priorities if needed
-
- ## Post-Addition
-
- After adding dependency:
- 1. Show updated dependency graph
- 2. Identify any newly blocked tasks
- 3. Suggest task order changes
- 4. Update project timeline
-
- ## Example Flows
-
- ```
- /taskmaster:add-dependency 5 needs 3
- → Task #5 now depends on Task #3
- → Task #5 is now blocked until #3 completes
- → Suggested: Also consider if #5 needs #4
- ```