@www.hyperlinks.space/program-kit 1.2.181818 → 18.18.18

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/fullREADME.md CHANGED
@@ -4,23 +4,31 @@
4
4
 
5
5
  <u>**In progress, contribute!**</u>
6
6
 
7
- This program is built upon [React Native](https://reactnative.dev/) by Meta and [Expo](https://expo.dev) multiplatform technologies, Windows build and executable creation achieved with [Electron Builder](https://www.electron.build/) and [Electron Forge](https://www.electronforge.io/), working in Telegram with help of [Telegram Mini Apps React SDK](http://telegram-mini-apps.com/) and [Bot API](https://core.telegram.org/bots). AI is backed by [OpenAI API](https://openai.com/ru-RU/api/), blockchain data is processed from [Swap.Coffee API](https://docs.swap.coffee/eng/user-guides/welcome).
8
-
9
- ## Program Kit
10
-
11
- To give value for other developers we decided to launch an npm package that provides a ready starter for creating a multiplatform program in one command.
12
-
13
- ```bash
14
- npx @www.hyperlinks.space/program-kit ./new-program
15
- ```
16
-
17
- Link to the package: https://www.npmjs.com/package/@www.hyperlinks.space/program-kit
7
+ This program is built upon [React Native](https://reactnative.dev/) by Meta and [Expo](https://expo.dev) multiplatform technologies. Windows builds and executables are created with [Electron Builder](https://www.electron.build/) and [Electron Forge](https://www.electronforge.io/); Telegram integration is handled with the [Telegram Mini Apps React SDK](http://telegram-mini-apps.com/), the [Bot API](https://core.telegram.org/bots) and [Grammy](https://grammy.dev/). AI is backed by the [OpenAI API](https://openai.com/ru-RU/api/), blockchain info is processed from the [Swap.Coffee API](https://docs.swap.coffee/eng/user-guides/welcome), and for the best user experience the database is hosted on [Neon](https://neon.tech/).
18
8
 
19
9
  ## Program design
20
10
 
21
11
  Access [Figma](https://www.figma.com/design/53lDKAD6pRv3e0uef1DP18/TECHSYMBAL-Inc.?node-id=754-71&t=v3tmAlywNgXkTWMd-1) in real time for contributing. Contact [Seva](https://t.me/sevaaignatyev) on Telegram to discuss and implement.
22
12
 
23
- Copying fully or partially, usage as an inspiration for other developments are unpleasant, participation in our projects is appreciated. All core materials are available publicly for instant access worldwide and our project availability for newcomers.
13
+ All core materials are available publicly for current hyperlinks.space team members' instant and easy access worldwide, and to keep the project available for newcomers' research only.
14
+
15
+ ## Structure
16
+
17
+ - [`app`](./app) - Expo/React Telegram Mini App client (web/mobile screens, navigation, UI logic).
18
+ - [`ui`](./ui) - shared UI layer (components, theme tokens, and font configuration used by the app).
19
+ - [`bot`](./bot) - TypeScript Telegram bot service and runtime entrypoints.
20
+ - [`database`](./database) - database startup/migration/service scripts.
21
+ - [`ai`](./ai) - AI assistant service logic and model integration points.
22
+ - [`api`](./api) - backend API handlers and server-side endpoints.
23
+ - [`blockchain`](./blockchain) - TON/blockchain interaction logic and related helpers.
24
+ - [`telegram`](./telegram) - Telegram-specific integration utilities and adapters.
25
+ - [`windows`](./windows) - Electron desktop shell, NSIS installer config, and auto-update flow.
26
+ - [`scripts`](./scripts) - developer/ops scripts (local run, migration, release helpers).
27
+ - [`docs`](./docs) - project and operational documentation (architecture, releases, security reference, tooling).
28
+ - [`research`](./research) - exploratory notes, investigations, and proposals not yet promoted to `docs/`.
29
+ - [`backlogs`](./backlogs) - short-term planning notes and prioritized work items.
30
+ - [`assets`](./assets) - static assets used by app, installer, and branding.
31
+ - [`dist`](./dist) - generated web build output (export artifacts).
24
32
 
25
33
  ## How to fork and contribute?
26
34
 
@@ -64,15 +72,36 @@ git switch -c new-branch-for-next-update # Create and switch to a new feature br
64
72
 
65
73
  **Move in loops starting from step 3.**
66
74
 
67
- ## Pull requests and commits requirements
75
+ ## Local deploy
68
76
 
69
- - Give pull requests and commits a proper name and description
70
- - Dedicate each pull request to an understandable area or field, each commit to a focused logical change
71
- - Check file changes in every commit pulled, no arbitrary files modifications should persist such as LF/CRLF line-ending conversion, broken/garbled text diffs, BOM added or removed, accidental "invisible" corruption from text filters
72
- - Add dependecies and packages step by step for security
73
- - An issue creation or following an existing before a pull request would be a good practice
77
+ `npm` package note: `.env.example` is included in the published package so you can use it as a reference for setting up your testing environment with a `.env` file.
78
+
79
+ Before local deploy / cloud deploy, prepare these env-backed services:
80
+
81
+ 1. **Neon PostgreSQL (`DATABASE_URL`)**
82
+ - Create an account/project at [Neon](https://neon.tech/).
83
+ - Create a database and copy the connection string.
84
+ - Put it into `.env` as `DATABASE_URL=...`.
85
+ 2. **OpenAI API (`OPENAI_API_KEY`)**
86
+ - Create an account at [OpenAI Platform](https://platform.openai.com/).
87
+ - Create an API key in the API Keys page.
88
+ - Put it into `.env` as `OPENAI_API_KEY=...`.
89
+ 3. **Telegram bot token (`BOT_TOKEN`)**
90
+ - In Telegram, open [@BotFather](https://t.me/BotFather), create a test bot with `/newbot`.
91
+ - Copy the bot token and put it into `.env` as `BOT_TOKEN=...`.
92
+ 4. **Vercel project envs (for convenient deploy/testing)**
93
+ - Create a [Vercel](https://vercel.com/) account and import this repository as a project.
94
+ - In Project Settings -> Environment Variables, set at least:
95
+ - `DATABASE_URL`
96
+ - `OPENAI_API_KEY`
97
+ - `BOT_TOKEN` (or `TELEGRAM_BOT_TOKEN`)
98
+ - Pull envs locally when needed with `vercel env pull .env.local`.
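
Putting the steps above together, a local `.env` might look like this (placeholder values only; the variable names follow the steps above):

```bash
# Neon PostgreSQL connection string (step 1), placeholder credentials/host.
DATABASE_URL=postgresql://user:password@ep-example.neon.tech/neondb?sslmode=require
# OpenAI API key (step 2), placeholder.
OPENAI_API_KEY=sk-your-key-here
# Telegram bot token from @BotFather (step 3), placeholder.
BOT_TOKEN=123456:your-bot-token
```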
99
+
100
+ Copy env template locally:
74
101
 
75
- ## Local deploy
102
+ ```bash
103
+ cp .env.example .env
104
+ ```
76
105
 
77
106
  To start the full local stack, run:
78
107
 
@@ -82,67 +111,31 @@ npm run start
82
111
 
83
112
  This runs the Expo dev server, the Telegram bot (polling mode), and the local Vercel API (`vercel dev`).
84
113
 
114
+ After `npm run start`, you can test the app on real phones with Expo Go:
115
+
116
+ - Install **Expo Go** from Google Play (Android) or App Store (iOS).
117
+ - Make sure your phone and development machine are on the same network.
118
+ - Open Expo Go and scan the QR code shown in the terminal/Expo UI.
119
+ - The app will launch on the device and hot-reload on code changes.
120
+
85
121
  Isolated/local run options:
86
122
 
87
123
  - Expo only (no bot, no Vercel): `npm run start:expo`
88
124
  - Bot only (polling mode): `npm run bot:local`
89
125
  - Vercel API only: `npm run dev:vercel`
90
126
 
91
- ## Milestone snapshot package (npm)
92
-
93
- NPM release and snapshot details were moved to `docs/npm-release.md`.
94
-
95
- ### Local env setup
96
-
97
- 1. **Copy the example file** (from the repository root):
98
- ```bash
99
- cp .env.example .env
100
- ```
101
- 2. **Edit `.env`** and set at least:
102
- - **`BOT_TOKEN`** – if you run the Telegram bot locally (`npm run bot:local`).
103
- 3. **Expo app** – `npx expo start` reads env from the environment; for app-only env vars you can also put them in `.env` and use an Expo-compatible loader if you add one, or set them in the shell before running:
104
- ```bash
105
- export BOT_TOKEN=your_token
106
- npx expo start
107
- ```
108
- 4. **Bot local** – `npm run bot:local` loads `.env` from the project root (optional; you can also set `BOT_TOKEN` in the shell).
109
-
110
- The `.env` file is gitignored; do not commit it.
111
-
112
127
  ## GitHub Actions
113
128
 
114
129
  Current Actions workflows include:
115
130
 
116
- - `Vercel Deploy Test` (`.github/workflows/vercel-deploy-test-envs.yml`) - manual web deploy to Vercel.
117
- - `NPM Package Release` (`.github/workflows/npm-package-release.yml`) - npm/GitHub Packages release workflow.
118
- - `Electron EXE Release` and `Electron Forge EXE Release` - manual Windows release pipelines.
119
- - `EXPO Publish` (`.github/workflows/expo-publish.yml`) - manual OTA publish with EAS CLI.
120
- - `Lint errors check` (`.github/workflows/lint-errors-check.yml`) - manual lint check.
121
-
122
- ## Expo Workflows
123
-
124
- This project uses two automation layers:
125
-
126
- - [EAS Workflows](https://docs.expo.dev/eas/workflows/get-started/) for Expo update/build/deploy flows (triggered via npm scripts from [`package.json`](./package.json)).
127
- - GitHub Actions for CI/CD tasks stored in `.github/workflows` (manual release/deploy jobs and checks).
128
-
129
- ### Previews
131
+ - [`Vercel Deploy Test`](./.github/workflows/vercel-deploy-test-envs.yml) - manual web deploy to Vercel.
132
+ - [`Electron Forge EXE Release`](./.github/workflows/electron-forge-exe-release.yml) - manual Windows release pipeline.
133
+ - [`Electron EXE Release`](./.github/workflows/electron-exe-release.yml) - manual Windows release pipeline.
134
+ - [`Lint errors check`](./.github/workflows/lint-errors-check.yml) - manual lint check.
135
+ - [`EXPO Publish`](./.github/workflows/expo-publish.yml) - manual OTA publish with EAS CLI.
136
+ - [`NPM Package Release`](./.github/workflows/npm-package-release.yml) - npm/GitHub Packages release workflow.
130
137
 
131
- Run `npm run draft` to [publish a preview update](https://docs.expo.dev/eas/workflows/examples/publish-preview-update/) of your project, which can be viewed in Expo Go or in a development build.
132
-
133
- ### Development Builds
134
-
135
- Run `npm run development-builds` to [create a development build](https://docs.expo.dev/eas/workflows/examples/create-development-builds/). Note - you'll need to follow the [Prerequisites](https://docs.expo.dev/eas/workflows/examples/create-development-builds/#prerequisites) to ensure you have the correct emulator setup on your machine.
136
-
137
- ### Production Deployments
138
-
139
- Run `npm run deploy` to [deploy to production](https://docs.expo.dev/eas/workflows/examples/deploy-to-production/). Note - you'll need to follow the [Prerequisites](https://docs.expo.dev/eas/workflows/examples/deploy-to-production/#prerequisites) to ensure you're set up to submit to the Apple and Google stores.
140
-
141
- ## Hosting
142
-
143
- Expo offers hosting for websites and API functions via EAS Hosting. See the [Getting Started](https://docs.expo.dev/eas/hosting/get-started/) guide to learn more.
144
-
145
- ### Deploy web build to Vercel
138
+ ## Deploy to Vercel
146
139
 
147
140
  From the repository root, deploy the static web build to Vercel production:
148
141
 
@@ -152,7 +145,7 @@ vercel --prod
152
145
 
153
146
  Deploying from the repository root makes this folder the project root, so `api/bot` is deployed and no Root Directory setting is needed. The project is configured so Vercel runs `npx expo export -p web` and serves the `dist/` output. Link the project first with `vercel` if needed.
154
147
 
155
- ## Telegram bot (Grammy)
148
+ ## Telegram bot
156
149
 
157
150
  The bot is extended beyond a basic "Hello" and "Start program" responder and now supports AI streaming and threads.
158
151
 
@@ -174,6 +167,84 @@ The bot is extended beyond a basic "Hello" and "Start program" responder and now
174
167
  - Run full local stack (Expo + bot + Vercel): `npm run start`
175
168
  - Keep production and local bot tokens separate when possible to avoid webhook/polling conflicts.
176
169
 
177
- ## Where to discuss the project?
170
+ ## Pull requests and commits requirements
171
+
172
+ - Give pull requests and commits a proper name and description
173
+ - Dedicate each pull request to an understandable area or field, each commit to a focused logical change
174
+ - Check the file changes in every commit you pull in; no arbitrary file modifications should persist, such as LF/CRLF line-ending conversions, broken/garbled text diffs, BOMs added or removed, or accidental "invisible" corruption from text filters
175
+ - Add dependencies and packages step by step for security
176
+ - Creating an issue, or following an existing one, before opening a pull request is good practice
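
The "no arbitrary file modifications" check above can be partly automated: `git diff --check` flags whitespace and CR damage, and `git show --stat` lists exactly which files a commit touches. A minimal sketch, shown here as a self-contained demo in a throwaway repo:

```bash
# Throwaway demo repo: one clean commit, then a whitespace-damaged change.
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "you"
printf 'clean line\n' > file.txt
git add file.txt && git commit -qm init
printf 'trailing whitespace \n' > file.txt

# Flags trailing whitespace / stray CR characters (exits non-zero when found).
git diff --check || true
# Lists exactly which files a commit touches.
git show --stat HEAD
```

In a real review you would run the last two commands from the repository root against the incoming commits.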
177
+
178
+ ## Expo Workflows
179
+
180
+ [EAS Workflows](https://docs.expo.dev/eas/workflows/get-started/) are here for Expo update/build/deploy flows (triggered via npm scripts from [`package.json`](./package.json)).
181
+
182
+ ## Previews
183
+
184
+ Run `npm run draft` to [publish a preview update](https://docs.expo.dev/eas/workflows/examples/publish-preview-update/) of your project, which can be viewed in Expo Go or in a development build.
185
+
186
+ ## Development Builds
187
+
188
+ Run `npm run development-builds` to [create a development build](https://docs.expo.dev/eas/workflows/examples/create-development-builds/). Note - you'll need to follow the [Prerequisites](https://docs.expo.dev/eas/workflows/examples/create-development-builds/#prerequisites) to ensure you have the correct emulator setup on your machine.
189
+
190
+ ## Expo envs setup
191
+
192
+ **Expo app** – `npx expo start` reads env from the environment; for app-only env vars you can also put them in `.env` and use an Expo-compatible loader if you add one, or set them in the shell before running:
193
+ ```bash
194
+ export BOT_TOKEN=your_token
195
+ npx expo start
196
+ ```
197
+
198
+ ## GitLab access
199
+
200
+ GitHub and GitLab repositories are identical. If you want to contribute through GitLab, get access from [@staindart](https://github.com/staindart).
201
+
202
+ If you can push to **both** [GitHub](https://github.com/HyperlinksSpace/HyperlinksSpaceProgram) and [GitLab](https://gitlab.com/hyperlinks.space/HyperlinksSpaceProgram) directly, we ask you to configure Git so pushes keep **both** hosts in sync: the repositories are the same; avoid updating only one side.
203
+
204
+ 1. **Keep `origin` on GitHub for fetch and the first push URL.** If you cloned from GitHub, this is already true: `origin` is where `git pull` / `git fetch origin` get updates. We standardize on GitHub for **incoming** history from `origin` so your local `main` tracks `origin/main` on GitHub.
205
+
206
+ 2. **Register GitLab as a second push URL on `origin`.** Git allows multiple **push** URLs per remote name, but only one **fetch** URL. Adding GitLab here means a single `git push origin <branch>` (or the IDE **Sync** push step) sends the same commits to **both** GitHub and GitLab without a second command.
207
+
208
+ ```bash
209
+ git remote set-url --add --push origin https://gitlab.com/hyperlinks.space/HyperlinksSpaceProgram.git
210
+ ```
211
+
212
+ Run this once per clone; it does not change where you fetch from.
213
+
214
+ 3. **Add a separate remote named `gitlab`.** Because `origin`’s fetch URL stays on GitHub, `git fetch origin` never downloads refs from GitLab. The extra remote lets you run `git fetch gitlab` when you need to compare or merge with the GitLab copy (for example if CI or another contributor updated GitLab only).
215
+
216
+ ```bash
217
+ git remote add gitlab https://gitlab.com/hyperlinks.space/HyperlinksSpaceProgram.git
218
+ ```
219
+
220
+ Note that the GitHub and GitLab URLs are slightly different :)
221
+
222
+ If `gitlab` already exists with a wrong URL, use `git remote set-url gitlab https://gitlab.com/hyperlinks.space/HyperlinksSpaceProgram.git` instead.
223
+
224
+ 4. **Verify** with `git remote -v`. You should see GitHub on fetch/push for `origin`, GitLab as the second `origin` push line, and `gitlab` for fetch/push to GitLab:
225
+
226
+ ```text
227
+ gitlab https://gitlab.com/hyperlinks.space/HyperlinksSpaceProgram.git (fetch)
228
+ gitlab https://gitlab.com/hyperlinks.space/HyperlinksSpaceProgram.git (push)
229
+ origin https://github.com/HyperlinksSpace/HyperlinksSpaceProgram.git (fetch)
230
+ origin https://github.com/HyperlinksSpace/HyperlinksSpaceProgram.git (push)
231
+ origin https://gitlab.com/hyperlinks.space/HyperlinksSpaceProgram.git (push)
232
+ ```
233
+
234
+ **GitLab HTTPS access:** GitLab.com does not support GitHub-style **fine-grained** tokens for Git-over-HTTPS (`git push` / `git fetch`). Create a classic **personal access token** under GitLab → **Edit profile** → **Access tokens** with the **`read_repository`** and **`write_repository`** scopes, as described in the official guide: [Personal access tokens](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html). When Git prompts, use your GitLab username and the token as the password. GitHub authentication stays separate (for example `gh auth login` or your existing GitHub credential).
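
To avoid re-typing the token on every push, a Git credential helper can cache it. A minimal sketch (the one-hour timeout is an arbitrary choice; the `cache` helper is available on Linux/macOS):

```bash
# Cache HTTPS credentials in memory for one hour; the first push prompts once
# (username = GitLab username, password = the personal access token).
git config --global credential.helper 'cache --timeout=3600'
# Verify the setting took effect.
git config --global credential.helper
```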
235
+
236
+ ## Program Kit
237
+
238
+ To make it easier for developers to create multiplatform programs with us, we decided to launch an npm package that provides a ready-made starter for scaffolding such a program in one command.
239
+
240
+ ```bash
241
+ npx @www.hyperlinks.space/program-kit ./new-program
242
+ ```
243
+
244
+ Link to the package: https://www.npmjs.com/package/@www.hyperlinks.space/program-kit
245
+
246
+ The **npm registry page** shows a separate package-oriented description: [`npmReadMe.md`](./npmReadMe.md) in the repo root. At publish time, the [NPM Package Release](.github/workflows/npm-package-release.yml) workflow copies the main [`README.md`](./README.md) to `fullREADME.md` and then replaces `README.md` with the contents of `npmReadMe.md`, so `npm pack` / `npm publish` ship the shorter readme as the package readme (npm always surfaces `README.md` from the tarball). Snapshot channels, tags, and local `npm pack` checks are documented in [`docs/npm-release.md`](./docs/npm-release.md).
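
The readme swap described above amounts to two file copies at publish time. A sketch on placeholder files (the real workflow operates on the actual repo root files):

```bash
# Demo on placeholder files in a throwaway directory.
cd "$(mktemp -d)"
printf 'full project readme\n' > README.md
printf 'short npm readme\n' > npmReadMe.md

# Preserve the full readme under a different name...
cp README.md fullREADME.md
# ...then ship the shorter npm-oriented readme as README.md,
# since npm surfaces README.md from the published tarball.
cp npmReadMe.md README.md
```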
247
+
248
+ ## Project discussions
178
249
 
179
250
  This repository has [GitHub Discussions](https://github.com/HyperlinksSpace/HyperlinksSpaceProgram/discussions) opened, as well you can join our [Telegram Chat](https://t.me/HyperlinksSpaceChat) and [Channel](https://t.me/HyperlinksSpace).
package/index.js ADDED
@@ -0,0 +1,3 @@
1
+ // Earliest bundle entry: polyfill Node `Buffer` before expo-router loads route modules (@ton/* uses it at load time).
2
+ import "./polyfills/buffer";
3
+ import "expo-router/entry";
package/package.json CHANGED
@@ -9,6 +9,7 @@
9
9
  "program-kit": "scripts/program-kit-init.cjs"
10
10
  },
11
11
  "files": [
12
+ ".env.example",
12
13
  "README.md",
13
14
  "fullREADME.md",
14
15
  "npmReadMe.md",
@@ -39,8 +40,8 @@
39
40
  "type": "git",
40
41
  "url": "https://github.com/HyperlinksSpace/HyperlinksSpaceBot.git"
41
42
  },
42
- "main": "expo-router/entry",
43
- "version": "1.2.181818",
43
+ "main": "index.js",
44
+ "version": "18.18.18",
44
45
  "type": "module",
45
46
  "engines": {
46
47
  "node": ">=18 <=22"
@@ -161,6 +162,10 @@
161
162
  "@react-navigation/native": "^7.1.6",
162
163
  "@swap-coffee/sdk": "^1.5.4",
163
164
  "@tma.js/sdk-react": "^3.0.16",
165
+ "@ton/core": "^0.63.1",
166
+ "@ton/crypto": "^3.3.0",
167
+ "@ton/ton": "^16.2.3",
168
+ "buffer": "^6.0.3",
164
169
  "electron-updater": "^6.8.3",
165
170
  "expo": "~54.0.0",
166
171
  "expo-blur": "~15.0.8",
@@ -1,8 +1,8 @@
1
1
  /**
2
- * Quick check that api/base.ts works. Run: npx tsx scripts/test-api-base.ts
2
+ * Quick check that api/_base.ts works. Run: npx tsx scripts/test-api-base.ts
3
3
  * In Node (no window), getApiBaseUrl() uses Vercel env or falls back to http://localhost:3000.
4
4
  */
5
- import { getApiBaseUrl, buildApiUrl } from "../api/base.js";
5
+ import { getApiBaseUrl, buildApiUrl } from "../api/_base.js";
6
6
 
7
7
  const base = getApiBaseUrl();
8
8
  const full = buildApiUrl("/api/telegram");
package/telegram/post.ts CHANGED
@@ -7,6 +7,7 @@ import {
7
7
  normalizeUsername,
8
8
  upsertUserFromTma,
9
9
  } from '../database/users.js';
10
+ import { getDefaultWalletByUsername } from '../database/wallets.js';
10
11
 
11
12
  const LOG_TAG = '[api/telegram]';
12
13
 
@@ -302,10 +303,53 @@ export async function handlePost(
302
303
  const dbStart = Date.now();
303
304
  try {
304
305
  await upsertUserFromTma({ telegramUsername, locale });
306
+ const wallet = await getDefaultWalletByUsername(telegramUsername);
305
307
  log('db_upsert_done', {
306
308
  dbMs: Date.now() - dbStart,
307
309
  elapsedMs: Date.now() - startMs,
310
+ hasWallet: !!wallet,
308
311
  });
312
+
313
+ if (wallet) {
314
+ log('success', {
315
+ telegramUsername,
316
+ hasWallet: true,
317
+ totalMs: Date.now() - startMs,
318
+ });
319
+ return new Response(
320
+ JSON.stringify({
321
+ ok: true,
322
+ telegram_username: telegramUsername,
323
+ has_wallet: true,
324
+ wallet: {
325
+ id: wallet.id,
326
+ wallet_address: wallet.wallet_address,
327
+ wallet_blockchain: wallet.wallet_blockchain,
328
+ wallet_net: wallet.wallet_net,
329
+ type: wallet.type,
330
+ label: wallet.label,
331
+ is_default: wallet.is_default,
332
+ source: wallet.source,
333
+ },
334
+ }),
335
+ { status: 200, headers: { 'content-type': 'application/json' } },
336
+ );
337
+ }
338
+
339
+ log('success', {
340
+ telegramUsername,
341
+ hasWallet: false,
342
+ totalMs: Date.now() - startMs,
343
+ });
344
+ return new Response(
345
+ JSON.stringify({
346
+ ok: true,
347
+ telegram_username: telegramUsername,
348
+ has_wallet: false,
349
+ wallet_required: true,
350
+ }),
351
+ { status: 200, headers: { 'content-type': 'application/json' } },
352
+ );
309
353
  } catch (e) {
310
354
  logErr('db_upsert_failed', e);
311
355
  return new Response(
@@ -317,12 +361,4 @@ export async function handlePost(
317
361
  );
318
362
  }
319
363
 
320
- log('success', {
321
- telegramUsername,
322
- totalMs: Date.now() - startMs,
323
- });
324
- return new Response(
325
- JSON.stringify({ ok: true, telegram_username: telegramUsername }),
326
- { status: 200, headers: { 'content-type': 'application/json' } },
327
- );
328
364
  }
@@ -1,94 +0,0 @@
1
- ## AI & Search bar input behaviour
2
-
3
- This document describes how the text in the global AI & Search bottom bar should behave as the user types, matching the Flutter implementation.
4
-
5
- ---
6
-
7
- ### 1. Reference states (pictures sequence)
8
-
9
- The reference images show a sequence of states for a long line of text; they illustrate how the bar grows and then turns into a scrolling window:
10
-
11
- 1. **Full text, no bar**
12
- Long multi‑line text fills a tall content area. This is effectively the raw content without the constraints of the bottom bar.
13
-
14
- 2. **Initial bar: single line + arrow**
15
- - Only a single line of text is visible.
16
- - The text baseline is horizontally aligned with the apply arrow icon on the right.
17
- - There is empty space above the line; bar height is at its minimum.
18
-
19
- 3. **Unconstrained multi‑line text**
20
- - Text has grown to multiple lines in a taller, unbounded view (again, this is the raw content).
21
-
22
- 4. **Growing bar: multiple lines + arrow**
23
- - The bottom bar has increased in height to show multiple lines.
24
- - As lines are added, the **space above the text shrinks**, but the **last visible line remains on the same vertical level as the arrow**.
25
- - Visually, the bar grows upwards while the arrow + last line baseline stays fixed.
26
-
27
- 5. **Very long text, no bar**
28
- - The entire long text block is visible in a tall area, showing how much total content exists.
29
-
30
- 6. **Capped bar height: scrolling window**
31
- - The bottom bar height is now capped (e.g. at 180 px).
32
- - The visible area becomes a **fixed‑height window** into the text:
33
- - Older lines at the top continue moving up and eventually disappear under the **top edge** of the bar as more text is entered.
34
- - The **last visible line stays aligned with the arrow baseline** at the bottom of the bar. The typing position does not move vertically once the bar has reached its maximum height.
35
-
36
- ---
37
-
38
- ### 2. Detailed behaviour by line count
39
-
40
- #### 1–7 lines: growing bar
41
-
42
- - For each new line from 1 up to 7:
43
- - The **bottom bar height increases** by exactly one line height (20 px).
44
- - The height formula is:
45
- \[
46
- \text{height} = 20\text{ (top padding)} + N \times 20\text{ (lines)} + 20\text{ (bottom padding)}, \quad 1 \le N \le 7.
47
- \]
48
- - The **last line is always on the same baseline as the arrow** on the right.
49
- - Visually, the bar grows **upwards**; the arrow + last line stay fixed at the bottom.
50
-
51
- #### 8 lines: text reaches the top edge
52
-
53
- - When the **8th line** appears:
54
- - The text block now reaches the **top edge of the bottom bar**.
55
- - The bar height is at its **maximum** (e.g. 180 px).
56
- - All 8 lines are still visible at once, from the top edge down to the arrow.
57
-
58
- #### 9 lines: full‑height text area, one line hidden
59
-
60
- - When the **9th line** appears:
61
- - The **scrollable text area is exactly 180 px high**, the same as the bar.
62
- - The **last line remains aligned with the arrow** at the bottom.
63
- - The **topmost line (1st)** is now hidden just above the top edge of the bar.
64
- - If the user scrolls, they can reveal all 9 lines, because:
65
- \[
66
- 9 \times 20\text{ px} = 180\text{ px},
67
- \]
68
- so all 9 lines can fit into the bar’s full height when scrolled to the appropriate position.
69
-
70
- #### 9+ lines: fixed bar, 9‑line scrolling window
71
-
72
- - For **any number of lines ≥ 9**:
73
- - The bar height stays fixed at its maximum (e.g. 180 px).
74
- - The **scrollable area always occupies the full bar height** (180 px).
75
- - At any moment:
76
- - Up to **9 lines are visible** in the window.
77
- - The **bottom (last visible) line stays aligned with the arrow** while typing.
78
- - Older lines scroll upwards and are hidden above the top edge; the user can scroll to reveal them.
79
-
80
- ---
81
-
82
- ### 3. Implementation‑oriented summary
83
-
84
- - **Line height & padding**
85
- - Line height: 20 px.
86
- - Top padding: 20 px.
87
- - Bottom padding: 20 px.
88
-
89
- - **Bar growth vs. scroll mode**
90
- - For 1–7 lines, bar height grows; arrow + last line baseline are fixed.
91
- - From the 8th line onward, the bar stays at max height; the input switches to a scrollable window that:
92
- - Always keeps the caret / last line baseline aligned with the arrow.
93
- - Hides older lines under the top edge while allowing them to be revealed by scrolling.
94
-
@@ -1,124 +0,0 @@
1
- # Plan: Implement messages table in the bot
2
-
3
- **Goal:** Persist bot messages in the `messages` table and use the DB for "no message mixing" (serverless). Optionally use thread history for AI context later.
4
-
5
- **Existing:** `app/database/messages.ts` has `insertMessage`, `getThreadHistory`, `getMaxTelegramUpdateIdForThread`. Schema in `start.ts`; table created by `db:migrate`. Bot: `responder.ts` handles text/caption, gets `ctx.from`, `message_thread_id`, calls `transmit`/`transmitStream`, replies. No DB persistence today; no thread history for AI.
6
-
7
- ---
8
-
9
- ## Best possible implementation (target design)
10
-
11
- **Split of responsibility**
12
-
13
- - **AI layer** owns all message persistence and context. **Bot** (and later TMA) own only transport and, for the bot, mixing prevention.
14
-
15
- **AI side (single place for persistence and context)**
16
-
17
- - Receives every request with: `input`, `user_telegram`, `thread_id`, `type` (`'bot'` | `'app'`), and optionally `telegram_update_id` (bot only).
18
- - **Claim / user message:** Inserts the user message (with `telegram_update_id` when provided). If insert returns `null` (unique violation), returns a **skipped** result so the caller does not send anything.
19
- - **Context:** Loads `getThreadHistory(...)` for that thread, converts to the format the model expects (e.g. `messages[]`), and passes current `input` + history to the model.
20
- - **Assistant message:** After a successful model response, inserts the assistant message (no `telegram_update_id`).
21
- - **Result:** One code path for “what gets stored” and “what context the model sees”. Bot and TMA both call this same layer; no duplicate insert logic in each client.
22
-
23
- **Bot side (mixing only)**
24
-
25
- - Resolves `user_telegram`, `thread_id`, `update_id` from `ctx`, and passes them into the AI call (including `telegram_update_id`).
26
- - If AI returns **skipped** (claim insert failed), returns without calling AI again and without sending any reply or draft.
27
- - Before **each** draft send and before the **final** reply: calls `getMaxTelegramUpdateIdForThread(user_telegram, thread_id, 'bot')`. If `max !== our update_id`, aborts (does not send). No message writes in the bot; only this read for mixing.
28
- - Sends drafts and final reply as today; does not call `insertMessage` itself.
29
-
30
- **TMA**
31
-
32
- - Calls the same AI layer with `user_telegram`, `thread_id`, `type: 'app'`. No `telegram_update_id`. Same persistence (user + assistant) and same history loading. No mixing logic unless we add a TMA-specific mechanism later (e.g. client request id + uniqueness).
33
-
34
- **Data flow (bot)**
35
-
36
- 1. User sends a message → webhook → bot handler.
37
- 2. Bot: resolve `user_telegram`, `thread_id`, `update_id`; call AI with `input`, `user_telegram`, `thread_id`, `type: 'bot'`, `telegram_update_id`.
38
- 3. AI: insert user message (with `telegram_update_id`). If `null` → return skipped. Else: load thread history, call model with history + current input, insert assistant message, return response (and for streaming: stream + insert assistant when done).
39
- 4. Bot: if skipped → return. Else: for each draft and for final reply, check `getMaxTelegramUpdateIdForThread`; if not ours, abort. Else send draft/reply.
40
-
41
- **Why this is best**
42
-
43
- - **Single source of truth:** All message rows and model context are created in the AI layer. Bot and TMA stay thin and consistent.
44
- - **No mixing in bot:** Mixing is entirely “check before send” + “skipped when claim fails”; no message writes in the bot.
45
- - **History by default:** AI always loads thread history and uses it for context, so conversations are coherent across turns.
46
- - **TMA-ready:** Same API for TMA (no `telegram_update_id`); mixing can be added later if needed.
47
-
48
- ---
49
-
50
- ## 1. Resolve thread identity and update_id in the bot
51
-
52
- - **user_telegram:** `normalizeUsername(ctx.from?.username)` (same as grammy upsert). If empty, we can skip persistence or still reply (plan: skip DB only when username missing).
53
- - **thread_id:** `ctx.message?.message_thread_id ?? 0` (already used in responder for `replyOptions`).
54
- - **type:** `'bot'`.
55
- - **update_id:** `ctx.update.update_id` (Grammy context has it). Must be passed into the handler or read from `ctx.update` in responder.
56
-
57
- **Where:** `responder.ts` (and optionally grammy if we need to pass update_id explicitly). Ensure we have access to `ctx.update.update_id` in `handleBotAiResponse`.
58
-
59
- ---
60
-
61
- ## 2. Insert user message first; skip if duplicate (claim by insert)
62
-
63
- - At the start of the AI flow (after we have `text`, `user_telegram`, `thread_id`), call:
64
- `insertMessage({ user_telegram, thread_id, type: 'bot', role: 'user', content: text, telegram_update_id })`.
65
- - If `insertMessage` returns `null` (unique violation → another instance or duplicate webhook), **return without calling AI or replying** (so only one handler "owns" this update).
66
-
67
- **Where:** `responder.ts`, right after we have `text` and before we set up streaming/cancellation. Requires `user_telegram` and `update_id`; user must exist in `users` (grammy already upserts before calling the handler).
68
-
69
- ---
70
-
71
- ## 3. Check "max update_id" before each send (no mixing)
72
-
73
- - Before each **draft** send and before the **final reply**, call:
74
- `getMaxTelegramUpdateIdForThread(user_telegram, thread_id, 'bot')`.
75
- - If the returned max is not equal to our `update_id`, another instance has already processed a newer user message → **abort** (do not send draft or reply). Same idea as current in-memory `isCancelled()`, but DB-backed so it works across serverless instances.
76
-
77
- **Where:** In `responder.ts`, inside `sendDraftOnce` / before `ctx.reply`: call the DB; if `max !== ourUpdateId`, treat as cancelled (return / skip send).
78
-
79
- ---
80
-
81
- ## 4. Persist assistant reply after successful send
82
-
83
- - After we send the final reply with `ctx.reply(result.output_text, replyOptions)` (and only when we actually send, not when we aborted or errored), call:
84
- `insertMessage({ user_telegram, thread_id, type: 'bot', role: 'assistant', content: result.output_text })` (no `telegram_update_id`).
85
-
86
- **Where:** `responder.ts`, after the successful `ctx.reply(...)`.
87
-
88
- ---
89
-
90
- ## 5. (Optional) Use thread history for AI context
91
-
92
- - Load history: `getThreadHistory({ user_telegram, thread_id, type: 'bot', limit })`.
93
- - Convert to the format expected by the AI (e.g. OpenAI `messages`: `{ role, content }[]`).
94
- - Pass this into the AI layer. Today `transmit`/`transmitStream` and `callOpenAiChat`/`callOpenAiChatStream` take a single `input` string; we’d need to extend the API to accept an optional `history` (or `messages`) and send a multi-turn request instead of a single user message.
95
-
96
- **Where:** New or changed code in `openai.ts` / `transmitter.ts` and call from `responder.ts` when in `chat` mode. Can be a follow-up step after 1–4.
97
-
98
- ---
99
-
100
- ## Implementation order (recommended)
101
-
102
- | Step | What | Files |
103
- |------|------|--------|
104
- | 1 | Resolve and pass `user_telegram`, `thread_id`, `update_id` in responder | `responder.ts` |
105
- | 2 | Insert user message at start; if `null`, return (no AI, no reply) | `responder.ts`, `database/messages.ts` (already has API) |
106
- | 3 | Before each draft and before final reply: check `getMaxTelegramUpdateIdForThread`; if max ≠ our `update_id`, abort send | `responder.ts` |
107
- | 4 | After successful `ctx.reply`, insert assistant message | `responder.ts` |
108
- | 5 | (Later) Load thread history and pass to AI | `responder.ts`, `ai/openai.ts`, `ai/transmitter.ts` |
109
-
110
- ---
111
-
112
- ## Edge cases
113
-
114
- - **No username:** If `user_telegram` is empty (no `ctx.from.username`), we can skip all DB calls and keep current behavior (reply without persisting), or refuse to reply; plan suggests skip persistence only.
115
- - **User not in DB:** `insertMessage` uses FK to `users(telegram_username)`. Grammy already upserts on message, so the user should exist. If we ever process before upsert, we’d get an FK error; keep upsert as first step in grammy (current behavior).
116
- - **Schema not run:** Ensure `ensureSchema()` runs before handlers (e.g. at deploy via `db:migrate`); no change needed if already in place.
117
-
118
- ---
119
-
120
- ## Summary
121
-
122
- 1. **Tell first:** This document is the plan.
123
- 2. **Implement 1–4** so the bot persists user and assistant messages and uses the DB for "only latest wins" (no mixing in serverless).
124
- 3. **Implement 5** later to add thread history to the AI.