kahu-signalk 0.0.6 → 0.0.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -185,8 +185,31 @@ wanting to build a more elaborate server-side setup.
185
185
  ## Runtime Requirements
186
186
 
187
187
  - **Signal K Server:** v2.22.1+ (latest stable)
188
- - **Node.js:** 20.x or later (required by Signal K server v2.22+)
189
- - **Dependencies:** avro-js, promise-socket, sqlite/sqlite3, uuid
188
+ - **Node.js:** **22.5.0+** (built-in `node:sqlite`; no native `sqlite3` bindings).
189
+ - **Dependencies:** avro-js, promise-socket, uuid
190
+
191
+ ---
192
+
193
+ ## Node.js version (enforced)
194
+
195
+ This plugin uses **Node’s built-in SQLite** (`node:sqlite`, `DatabaseSync`) — **not** the `sqlite3` npm package. That avoids native addon / binding failures entirely.
196
+
197
+ | Requirement | Detail |
198
+ |---------------|--------|
199
+ | **Node.js** | **22.5.0 or later** (same major line Signal K recommends) |
200
+ | **Install check** | `npm install` runs a **preinstall** script; it **exits with an error** on older Node so you get a clear message instead of a runtime crash. |
201
+
202
+ If you see *“kahu-signalk requires Node.js 22.5.0 or later”*, switch the Node version used to run Signal K (e.g. with [nvm](https://github.com/nvm-sh/nvm) or by following [Signal K’s Node guide](https://github.com/SignalK/signalk-server/wiki/Installing-and-Updating-Node.js)), then install the plugin again.
203
+
204
+ You may see an **ExperimentalWarning** about SQLite from Node; that is expected until the API is stabilized.
205
+
206
+ ### Command typo
207
+
208
+ Use `--config-dir` (with a **g**), not `--confic-dir`:
209
+ ```bash
210
+ npm start -- --config-dir /home/bs01743/Projects/KAHU/signalk-server/signalk-server/config
211
+ ```
212
+ The `--` is needed so npm passes `--config-dir` to the start script.
190
213
 
191
214
  ---
192
215
 
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2025 KAHU Earth AS
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
@@ -0,0 +1,13 @@
1
+ # KAHU Radar Hub protocol
2
+ *A radar crowdsourcing protocol*
3
+
4
+ Contribute AIS and ARPA targets from your vessel to a crowdsourced dataset for marine safety!
5
+
6
+ This is the protocol specification used by our plugins and clients as well as our server code (e.g. [radarhub-opencpn](https://github.com/KAHU-radar/radarhub-opencpn) and [radarhub-signalk](https://github.com/KAHU-radar/radarhub-signalk)), letting you upload AIS and radar ARPA targets (or any NMEA) to an internet server.
7
+ The communication protocol is based on [Apache Avro](https://avro.apache.org/) and batches track points, so the per-point overhead beyond timestamp and lat/lon stays low; it is designed to be as bandwidth-conservative as possible.
8
+
9
+ Note: This protocol does **not** use the Avro RPC mechanism, as it is not well supported in all languages and imposes extra requirements, such as framing each message. Instead, it relies on a simple union of message types.
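In Avro's JSON encoding, a union value is an object with a single key naming the branch, so a receiver can dispatch on that key alone. A hypothetical dispatcher (the handler shapes are invented for illustration; only the branch names come from the schema) might look like:

```javascript
// Dispatch on the single branch key of an Avro-style union value.
// Handler names and return values here are illustrative, not part of the spec.
function dispatch(message, handlers) {
  const [branch] = Object.keys(message); // e.g. "kahu.Call" or "kahu.Response"
  const handler = handlers[branch];
  if (!handler) throw new Error(`Unknown message type: ${branch}`);
  return handler(message[branch]);
}

const result = dispatch(
  { 'kahu.Call': { Call: { id: 7, Call: { 'kahu.LoginMessage': { Login: { apikey: 'k' } } } } } },
  { 'kahu.Call': (call) => `call #${call.Call.id}` }
);
console.log(result); // → "call #7"
```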
10
+
11
+ ## Database schema
12
+
13
+ The protocol includes a client-side database schema used to cache tracks in an SQLite database. It is provided as a series of migration SQL files, to be applied in alphabetical order.
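The ordering convention can be implemented with a plain sort plus a numeric-prefix filter; this sketch (filenames invented for illustration) shows what a client might do to pick the migrations still to apply:

```javascript
// Select migrations still to apply, given the highest id already recorded.
// Filenames are expected to start with a numeric id, e.g. "001-init.sql".
function pendingMigrations(filenames, appliedMaxId) {
  return filenames
    .slice()
    .sort() // alphabetical order == application order
    .filter((name) => {
      const id = parseInt(name, 10);
      return Number.isFinite(id) && id > appliedMaxId; // skip non-migrations and applied ids
    });
}

const pending = pendingMigrations(
  ['002-indexes.sql', '001-init.sql', 'README.md'],
  1
);
console.log(pending); // → ['002-indexes.sql']
```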
@@ -0,0 +1,24 @@
1
+ create table if not exists target (
2
+ target_id integer primary key autoincrement,
3
+ uuid varchar(36) unique
4
+ );
5
+
6
+ create table if not exists target_position (
7
+ id integer primary key autoincrement,
8
+ timestamp datetime default current_timestamp,
9
+ target_id integer references target(target_id),
10
+ target_distance float,
11
+ target_bearing float,
12
+ target_bearing_unit text,
13
+ target_speed float,
14
+ target_course float,
15
+ target_course_unit text,
16
+ target_distance_unit text,
17
+ target_name text,
18
+ target_status text,
19
+ latitude float,
20
+ longitude float,
21
+ target_latitude float,
22
+ target_longitude float,
23
+ sent integer default false
24
+ );
@@ -0,0 +1,3 @@
1
+ CREATE INDEX idx_sent ON target_position(sent);
2
+ CREATE INDEX idx_target_id ON target_position(target_id);
3
+ CREATE INDEX idx_sent_target_id_timestamp ON target_position(sent, target_id, timestamp);
@@ -0,0 +1,71 @@
1
+ {"name": "kahu.Proto", "type": "record",
2
+ "fields": [
3
+ {"name": "Message", "type": [
4
+ {"name": "kahu.Call", "type": "record",
5
+ "fields": [
6
+ {"name": "Call", "type": {
7
+ "name": "kahu.CallMessage", "type": "record",
8
+ "fields": [
9
+ {"name": "id", "type": "int"},
10
+ {"name": "Call", "type": [
11
+ {"name": "kahu.LoginMessage", "type": "record",
12
+ "fields": [{"name": "Login", "type": {
13
+ "name": "kahu.Login", "type": "record",
14
+ "fields": [
15
+ {"name": "apikey", "type": "string"}
16
+ ]
17
+ }}]},
18
+ {"name": "kahu.SubmitMessage", "type": "record",
19
+ "fields": [{"name": "Submit", "type": {
20
+ "name": "kahu.Submit", "type": "record",
21
+ "fields": [
22
+ {"name": "uuid", "type": ["null", "string"], "logicalType": "uuid"},
23
+ {"name": "route", "type": {
24
+ "type": "array", "items": {
25
+ "name": "kahu.LineString",
26
+ "type": "record",
27
+ "fields": [
28
+ {"name": "lat", "type": "float"},
29
+ {"name": "lon", "type": "float"},
30
+ {"name": "timestamp", "type": "float"}
31
+ ]
32
+ }}},
33
+ {"name": "nmea", "type": ["null", "string"]},
34
+ {"name": "start", "type": "long", "logicalType": "timestamp-millis"}
35
+ ]
36
+ }}]}
37
+ ]}
38
+ ]}}
39
+ ]},
40
+ {"name": "kahu.Response", "type": "record",
41
+ "fields": [
42
+ {"name": "Response", "type": {
43
+ "name": "kahu.ResponseMessage", "type": "record",
44
+ "fields": [
45
+ {"name": "id", "type": "int"},
46
+ {"name": "Response", "type": [
47
+ {"name": "kahu.ErrorResponseMessage", "type": "record",
48
+ "fields": [{"name": "Error", "type": {
49
+ "name": "kahu.ErrorResponse", "type": "record",
50
+ "fields": [
51
+ {"name": "exception", "type": "string"}
52
+ ]
53
+ }}]},
54
+ {"name": "kahu.LoginResponseMessage", "type": "record",
55
+ "fields": [{"name": "Login", "type": {
56
+ "name": "kahu.LoginResponse", "type": "record",
57
+ "fields": []
58
+ }}]},
59
+ {"name": "kahu.SubmitResponseMessage", "type": "record",
60
+ "fields": [{"name": "Submit", "type": {
61
+ "name": "kahu.SubmitResponse", "type": "record",
62
+ "fields": [
63
+ {"name": "uuid", "type": ["null", "string"], "logicalType": "uuid"}
64
+ ]
65
+ }}]}
66
+ ]}
67
+ ]}}
68
+ ]}
69
+ ]}
70
+ ]
71
+ }
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "kahu-signalk",
3
- "version": "0.0.6",
3
+ "version": "0.0.8",
4
4
  "description": "Contribute AIS and ARPA targets from your vessel to crowdsourcing for marine safety!",
5
5
  "keywords": [
6
6
  "signalk-node-server-plugin",
@@ -15,9 +15,10 @@
15
15
  },
16
16
  "main": "plugin/index.js",
17
17
  "engines": {
18
- "node": ">=20"
18
+ "node": ">=22.5.0"
19
19
  },
20
20
  "scripts": {
21
+ "preinstall": "node scripts/check-node-version.js",
21
22
  "test": "echo \"Error: no test specified passing build\" && exit 0"
22
23
  },
23
24
  "author": "Kahu <info@kahu.earth>",
@@ -29,8 +30,6 @@
29
30
  "dependencies": {
30
31
  "avro-js": "^1.12.0",
31
32
  "promise-socket": "^8.0.0",
32
- "sqlite": "^5.1.1",
33
- "sqlite3": "^6.0.1",
34
33
  "uuid": "^8.1.0"
35
34
  }
36
35
  }
@@ -1,8 +1,23 @@
1
- const sqlite3 = require('sqlite3').verbose();
2
- const sqlite = require('sqlite');
3
1
  const fs = require('fs').promises;
4
2
  const path = require('path');
5
3
 
4
+ let DatabaseSync;
5
+ try {
6
+ ({ DatabaseSync } = require('node:sqlite'));
7
+ } catch (e) {
8
+ DatabaseSync = null;
9
+ }
10
+
11
+ function requireNodeSqlite() {
12
+ if (!DatabaseSync) {
13
+ throw new Error(
14
+ 'kahu-signalk requires Node.js 22.5+ (built-in node:sqlite). ' +
15
+ 'Install Node 22 LTS as recommended for Signal K: ' +
16
+ 'https://github.com/SignalK/signalk-server/wiki/Installing-and-Updating-Node.js'
17
+ );
18
+ }
19
+ }
20
+
6
21
  class Routecache {
7
22
  constructor(migrations_dir, db_name) {
8
23
  this.db = null;
@@ -11,8 +26,25 @@ class Routecache {
11
26
  console.log("Routecache created for " + db_name + " with migrations " + migrations_dir);
12
27
  }
13
28
 
29
+ _sanitizeParam(v) {
30
+ return v === undefined ? null : v;
31
+ }
32
+
33
+ _all(sql, params = []) {
34
+ const stmt = this.db.prepare(sql);
35
+ const safe = params.map(this._sanitizeParam);
36
+ return safe.length ? stmt.all(...safe) : stmt.all();
37
+ }
38
+
39
+ _run(sql, params = []) {
40
+ const stmt = this.db.prepare(sql);
41
+ const safe = params.map(this._sanitizeParam);
42
+ return safe.length ? stmt.run(...safe) : stmt.run();
43
+ }
44
+
14
45
  async init() {
15
46
  try {
47
+ requireNodeSqlite();
16
48
  await this.openDB();
17
49
  await this.createEmpty();
18
50
  await this.migrate();
@@ -30,33 +62,30 @@ class Routecache {
30
62
  }
31
63
  }
32
64
 
33
- async doesTableExist(tableName) {
34
- const query = await this.db.all(
35
- "SELECT count(*) as count FROM sqlite_master WHERE type='table' AND name=?",
36
- [tableName]);
37
- return query[0].count > 0;
65
+ doesTableExist(tableName) {
66
+ const row = this.db
67
+ .prepare("SELECT count(*) as count FROM sqlite_master WHERE type='table' AND name=?")
68
+ .get(tableName);
69
+ return Number(row.count) > 0;
38
70
  }
39
71
 
40
72
  async destroy() {
41
- if (this.db != null) await this.db.close();
73
+ if (this.db != null) this.db.close();
42
74
  this.db = null;
43
75
  }
44
76
 
45
77
  async openDB() {
46
- this.db = await sqlite.open({
47
- filename: this.db_name,
48
- driver: sqlite3.Database
49
- });
78
+ this.db = new DatabaseSync(this.db_name);
50
79
  }
51
80
 
52
81
  async closeDB() {
53
- if (this.db != null) await this.db.close();
82
+ if (this.db != null) this.db.close();
54
83
  this.db = null;
55
84
  }
56
85
 
57
86
  async createEmpty() {
58
87
  if (await this.doesTableExist("migrations")) return;
59
- const res = await this.db.run(`
88
+ this.db.exec(`
60
89
  CREATE TABLE IF NOT EXISTS migrations (
61
90
  id integer,
62
91
  name text,
@@ -66,58 +95,51 @@ class Routecache {
66
95
  }
67
96
 
68
97
  async migrate() {
69
- const rows = await this.db.all(
70
- "SELECT max(id) as maxid FROM migrations");
98
+ const rows = this._all("SELECT max(id) as maxid FROM migrations");
71
99
  const maxId = rows[0].maxid;
100
+ const baseline = maxId == null ? 0 : Number(maxId);
72
101
 
73
102
  try {
74
103
  const files = (
75
- await fs.readdir(
76
- this.migrations_dir, { withFileTypes: true }
77
- )
78
- ).filter(
79
- (file) => file.isFile()
80
- ).map(
81
- (file) => file.name);
104
+ await fs.readdir(this.migrations_dir, { withFileTypes: true })
105
+ )
106
+ .filter((file) => file.isFile())
107
+ .map((file) => file.name);
82
108
  files.sort();
83
-
109
+
84
110
  for (const filename of files) {
85
111
  const migrationId = parseInt(filename, 10);
86
- if (migrationId > maxId) {
87
- const migrationPath = path.join(this.migrations_dir, filename);
88
- await this.runMigration(migrationId, migrationPath);
89
- }
112
+ if (!Number.isFinite(migrationId) || migrationId <= baseline) continue;
113
+ const migrationPath = path.join(this.migrations_dir, filename);
114
+ await this.runMigration(migrationId, migrationPath);
90
115
  }
91
116
  } catch (error) {
92
- throw new Error(`Unable to process migrations directory: ${error.message}`);
117
+ throw new Error(`Unable to process migrations directory: ${error.message}`);
93
118
  }
94
119
  }
95
120
 
96
121
  async runMigration(i, name) {
97
122
  console.error("Running migration ", i, ": ", name);
98
123
 
99
- const sql = await fs.readFile(name, 'utf8');
100
- await this.db.exec(sql);
101
- await this.db.run(
102
- "insert into migrations (id, name) values (?, ?)",
103
- [i, name]);
124
+ const sql = await fs.readFile(name, 'utf8');
125
+ this.db.exec(sql);
126
+ this._run("insert into migrations (id, name) values (?, ?)", [i, name]);
104
127
  }
105
128
 
106
- async insert({...props}) {
107
- const target_count = await this.db.all(`
108
- select count(*) as count from target where uuid = ?;
109
- `, [props.target_id]);
110
- if (target_count[0].count == 0) {
111
- await this.db.run(`
112
- insert into target (uuid) values (?);
113
- `, [props.target_id]);
129
+ async insert({ ...props }) {
130
+ const target_count = this._all(
131
+ `select count(*) as count from target where uuid = ?;`,
132
+ [props.target_id]
133
+ );
134
+ if (Number(target_count[0].count) === 0) {
135
+ this._run(`insert into target (uuid) values (?);`, [props.target_id]);
114
136
  }
115
- const target = await this.db.all(`
116
- select target_id from target where uuid = ?;
117
- `, [props.target_id]);
118
-
119
- await this.db.run(`
120
- insert into target_position (
137
+ const target = this._all(`select target_id from target where uuid = ?;`, [
138
+ props.target_id,
139
+ ]);
140
+
141
+ this._run(
142
+ `insert into target_position (
121
143
  target_id,
122
144
  target_distance,
123
145
  target_bearing,
@@ -135,32 +157,35 @@ class Routecache {
135
157
  ) values (
136
158
  ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
137
159
  `,
138
- [target[0].target_id,
139
- props.position?.relative?.distance,
140
- props.position?.relative?.bearing,
141
- props.position?.relative?.bearing_unit,
142
- props.speedOverGround,
143
- props.courseOverGroundTrue,
144
- 'T', // target_course_unit
145
- props.position?.relative?.distance_unit,
146
- props.name,
147
- 'T', //props.target_status,
148
- props.position?.relative?.position?.latitude,
149
- props.position?.relative?.position?.longitude,
150
- props.position?.latitude,
151
- props.position?.longitude]);
160
+ [
161
+ target[0].target_id,
162
+ props.position?.relative?.distance,
163
+ props.position?.relative?.bearing,
164
+ props.position?.relative?.bearing_unit,
165
+ props.speedOverGround,
166
+ props.courseOverGroundTrue,
167
+ 'T',
168
+ props.position?.relative?.distance_unit,
169
+ props.name,
170
+ 'T',
171
+ props.position?.relative?.position?.latitude,
172
+ props.position?.relative?.position?.longitude,
173
+ props.position?.latitude,
174
+ props.position?.longitude,
175
+ ]
176
+ );
152
177
  }
153
178
 
154
179
  async connectionStats() {
155
- const query1 = await this.db.all(`
180
+ const query1 = this._all(`
156
181
  select
157
182
  count(*) as unsent_datapoints
158
183
  from
159
184
  target_position
160
- where
185
+ where
161
186
  not sent;
162
187
  `);
163
- const query2 = await this.db.all(`
188
+ const query2 = this._all(`
164
189
  select
165
190
  count(*) as unsent_tracks
166
191
  from
@@ -172,12 +197,14 @@ class Routecache {
172
197
  not sent
173
198
  )
174
199
  `);
175
- return {unsent_datapoints: query1[0].unsent_datapoints,
176
- unsent_tracks: query2[0].unsent_tracks};
200
+ return {
201
+ unsent_datapoints: Number(query1[0].unsent_datapoints),
202
+ unsent_tracks: Number(query2[0].unsent_tracks),
203
+ };
177
204
  }
178
-
205
+
179
206
  async retrieve() {
180
- const query = await this.db.all(`
207
+ const query = this._all(`
181
208
  select
182
209
  target.uuid,
183
210
  target_position.timestamp,
@@ -201,9 +228,9 @@ class Routecache {
201
228
  target_id
202
229
  from
203
230
  target_position
204
- group by
231
+ group by
205
232
  target_id
206
- having
233
+ having
207
234
  count(*) > 1
208
235
  )
209
236
  order by
@@ -218,19 +245,17 @@ class Routecache {
218
245
  if (!query.length) return null;
219
246
 
220
247
  const res = {
221
- uuid: {"string": query[0].uuid},
248
+ uuid: { string: query[0].uuid },
222
249
  route: [],
223
250
  nmea: null,
224
- start: query[0].timestamp_epoch};
225
-
226
- let isfirst = true;
227
- let start;
251
+ start: Number(query[0].timestamp_epoch),
252
+ };
228
253
 
229
254
  for (const row of query) {
230
255
  res.route.push({
231
256
  lat: row.target_latitude,
232
257
  lon: row.target_longitude,
233
- timestamp: row.timestamp_epoch - res.start
258
+ timestamp: Number(row.timestamp_epoch) - res.start,
234
259
  });
235
260
  }
236
261
 
@@ -238,28 +263,32 @@ class Routecache {
238
263
  }
239
264
 
240
265
  async markAsSent(route_message) {
241
- const end = route_message.route[route_message.route.length - 1].timestamp + route_message.start;
266
+ const end =
267
+ route_message.route[route_message.route.length - 1].timestamp +
268
+ route_message.start;
242
269
 
243
270
  const uuid = route_message.uuid.string;
244
-
245
- const query = await this.db.run(`
246
- update
271
+
272
+ const result = this._run(
273
+ `update
247
274
  target_position
248
275
  set
249
276
  sent = 1
250
277
  where
251
278
  target_id = (select target_id from target where uuid = ?)
252
- and timestamp <= datetime(? / 1000, 'unixepoch');
253
- `, [uuid, end]);
254
-
279
+ and timestamp <= datetime(? / 1000, 'unixepoch');`,
280
+ [uuid, end]
281
+ );
282
+
255
283
  console.error(
256
- "Updated "
257
- + query.changes
258
- + " rows for "
259
- + uuid
260
- + " @ "
261
- + end
262
- + ".");
284
+ 'Updated ' +
285
+ result.changes +
286
+ ' rows for ' +
287
+ uuid +
288
+ ' @ ' +
289
+ end +
290
+ '.'
291
+ );
263
292
  }
264
293
  }
265
294
 
@@ -0,0 +1,16 @@
1
+ #!/usr/bin/env node
2
+ /**
3
+ * kahu-signalk uses Node's built-in node:sqlite (no native sqlite3 bindings).
4
+ * Requires Node 22.5+ where DatabaseSync is available.
5
+ */
6
+ const [major, minor] = process.versions.node.split('.').map(Number);
7
+ const ok = major > 22 || (major === 22 && minor >= 5);
8
+ if (!ok) {
9
+ console.error(
10
+ '\nkahu-signalk requires Node.js 22.5.0 or later.\n' +
11
+ 'This plugin uses the built-in SQLite module (node:sqlite) so it does not depend on native sqlite3 binaries.\n' +
12
+ 'Signal K recommends Node 22: https://github.com/SignalK/signalk-server/wiki/Installing-and-Updating-Node.js\n' +
13
+ `Current Node version: ${process.version}\n`
14
+ );
15
+ process.exit(1);
16
+ }