instar 0.28.76 → 0.28.78
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dashboard/index.html +486 -0
- package/dist/cli.js +5 -8
- package/dist/cli.js.map +1 -1
- package/dist/commands/discovery.d.ts.map +1 -1
- package/dist/commands/discovery.js +2 -2
- package/dist/commands/discovery.js.map +1 -1
- package/dist/commands/init.d.ts.map +1 -1
- package/dist/commands/init.js +22 -4
- package/dist/commands/init.js.map +1 -1
- package/dist/commands/job.d.ts.map +1 -1
- package/dist/commands/job.js +2 -2
- package/dist/commands/job.js.map +1 -1
- package/dist/commands/ledgerCleanup.d.ts.map +1 -1
- package/dist/commands/ledgerCleanup.js +2 -2
- package/dist/commands/ledgerCleanup.js.map +1 -1
- package/dist/commands/listener.d.ts.map +1 -1
- package/dist/commands/listener.js +7 -12
- package/dist/commands/listener.js.map +1 -1
- package/dist/commands/nuke.d.ts.map +1 -1
- package/dist/commands/nuke.js +11 -21
- package/dist/commands/nuke.js.map +1 -1
- package/dist/commands/server.d.ts.map +1 -1
- package/dist/commands/server.js +79 -5
- package/dist/commands/server.js.map +1 -1
- package/dist/commands/setup.d.ts.map +1 -1
- package/dist/commands/setup.js +11 -15
- package/dist/commands/setup.js.map +1 -1
- package/dist/commands/slack-cli.d.ts.map +1 -1
- package/dist/commands/slack-cli.js +5 -8
- package/dist/commands/slack-cli.js.map +1 -1
- package/dist/commands/whatsapp.d.ts.map +1 -1
- package/dist/commands/whatsapp.js +2 -2
- package/dist/commands/whatsapp.js.map +1 -1
- package/dist/commands/worktree.d.ts.map +1 -1
- package/dist/commands/worktree.js +2 -2
- package/dist/commands/worktree.js.map +1 -1
- package/dist/core/AgentConnector.d.ts.map +1 -1
- package/dist/core/AgentConnector.js +9 -10
- package/dist/core/AgentConnector.js.map +1 -1
- package/dist/core/AgentRegistry.d.ts.map +1 -1
- package/dist/core/AgentRegistry.js +3 -4
- package/dist/core/AgentRegistry.js.map +1 -1
- package/dist/core/AutoDispatcher.d.ts.map +1 -1
- package/dist/core/AutoDispatcher.js +2 -2
- package/dist/core/AutoDispatcher.js.map +1 -1
- package/dist/core/AutoUpdater.d.ts.map +1 -1
- package/dist/core/AutoUpdater.js +2 -2
- package/dist/core/AutoUpdater.js.map +1 -1
- package/dist/core/AutonomousEvolution.d.ts.map +1 -1
- package/dist/core/AutonomousEvolution.js +2 -2
- package/dist/core/AutonomousEvolution.js.map +1 -1
- package/dist/core/BackupManager.d.ts.map +1 -1
- package/dist/core/BackupManager.js +2 -2
- package/dist/core/BackupManager.js.map +1 -1
- package/dist/core/BranchManager.d.ts.map +1 -1
- package/dist/core/BranchManager.js +3 -3
- package/dist/core/BranchManager.js.map +1 -1
- package/dist/core/CaffeinateManager.d.ts.map +1 -1
- package/dist/core/CaffeinateManager.js +2 -2
- package/dist/core/CaffeinateManager.js.map +1 -1
- package/dist/core/DeferredDispatchTracker.d.ts.map +1 -1
- package/dist/core/DeferredDispatchTracker.js +2 -2
- package/dist/core/DeferredDispatchTracker.js.map +1 -1
- package/dist/core/DispatchManager.d.ts.map +1 -1
- package/dist/core/DispatchManager.js +3 -4
- package/dist/core/DispatchManager.js.map +1 -1
- package/dist/core/EvolutionManager.d.ts.map +1 -1
- package/dist/core/EvolutionManager.js +2 -2
- package/dist/core/EvolutionManager.js.map +1 -1
- package/dist/core/ExecutionJournal.d.ts.map +1 -1
- package/dist/core/ExecutionJournal.js +2 -2
- package/dist/core/ExecutionJournal.js.map +1 -1
- package/dist/core/FeedbackManager.d.ts.map +1 -1
- package/dist/core/FeedbackManager.js +2 -2
- package/dist/core/FeedbackManager.js.map +1 -1
- package/dist/core/FileClassifier.d.ts.map +1 -1
- package/dist/core/FileClassifier.js +8 -17
- package/dist/core/FileClassifier.js.map +1 -1
- package/dist/core/ForegroundRestartWatcher.d.ts.map +1 -1
- package/dist/core/ForegroundRestartWatcher.js +3 -4
- package/dist/core/ForegroundRestartWatcher.js.map +1 -1
- package/dist/core/GitStateManager.d.ts.map +1 -1
- package/dist/core/GitStateManager.js +3 -12
- package/dist/core/GitStateManager.js.map +1 -1
- package/dist/core/GitSync.d.ts.map +1 -1
- package/dist/core/GitSync.js +6 -6
- package/dist/core/GitSync.js.map +1 -1
- package/dist/core/GlobalInstallCleanup.d.ts.map +1 -1
- package/dist/core/GlobalInstallCleanup.js +3 -4
- package/dist/core/GlobalInstallCleanup.js.map +1 -1
- package/dist/core/GlobalSecretStore.d.ts.map +1 -1
- package/dist/core/GlobalSecretStore.js +3 -4
- package/dist/core/GlobalSecretStore.js.map +1 -1
- package/dist/core/HandoffManager.d.ts.map +1 -1
- package/dist/core/HandoffManager.js +5 -5
- package/dist/core/HandoffManager.js.map +1 -1
- package/dist/core/JargonDetector.d.ts +28 -0
- package/dist/core/JargonDetector.d.ts.map +1 -0
- package/dist/core/JargonDetector.js +59 -0
- package/dist/core/JargonDetector.js.map +1 -0
- package/dist/core/LedgerSessionRegistry.d.ts.map +1 -1
- package/dist/core/LedgerSessionRegistry.js +2 -2
- package/dist/core/LedgerSessionRegistry.js.map +1 -1
- package/dist/core/MachineIdentity.d.ts.map +1 -1
- package/dist/core/MachineIdentity.js +2 -2
- package/dist/core/MachineIdentity.js.map +1 -1
- package/dist/core/MessagingToneGate.d.ts +42 -5
- package/dist/core/MessagingToneGate.d.ts.map +1 -1
- package/dist/core/MessagingToneGate.js +40 -6
- package/dist/core/MessagingToneGate.js.map +1 -1
- package/dist/core/ParallelDevWiring.d.ts.map +1 -1
- package/dist/core/ParallelDevWiring.js +3 -6
- package/dist/core/ParallelDevWiring.js.map +1 -1
- package/dist/core/PostUpdateMigrator.d.ts +26 -0
- package/dist/core/PostUpdateMigrator.d.ts.map +1 -1
- package/dist/core/PostUpdateMigrator.js +249 -46
- package/dist/core/PostUpdateMigrator.js.map +1 -1
- package/dist/core/ProjectMapper.d.ts.map +1 -1
- package/dist/core/ProjectMapper.js +5 -11
- package/dist/core/ProjectMapper.js.map +1 -1
- package/dist/core/RelationshipManager.d.ts.map +1 -1
- package/dist/core/RelationshipManager.js +4 -5
- package/dist/core/RelationshipManager.js.map +1 -1
- package/dist/core/SafeGitExecutor.d.ts +11 -5
- package/dist/core/SafeGitExecutor.d.ts.map +1 -1
- package/dist/core/SafeGitExecutor.js +87 -1
- package/dist/core/SafeGitExecutor.js.map +1 -1
- package/dist/core/ScopeVerifier.d.ts.map +1 -1
- package/dist/core/ScopeVerifier.js +3 -6
- package/dist/core/ScopeVerifier.js.map +1 -1
- package/dist/core/SecretStore.d.ts.map +1 -1
- package/dist/core/SecretStore.js +2 -2
- package/dist/core/SecretStore.js.map +1 -1
- package/dist/core/SharedStateLedger.d.ts.map +1 -1
- package/dist/core/SharedStateLedger.js +2 -2
- package/dist/core/SharedStateLedger.js.map +1 -1
- package/dist/core/SoulManager.d.ts.map +1 -1
- package/dist/core/SoulManager.js +3 -4
- package/dist/core/SoulManager.js.map +1 -1
- package/dist/core/StateManager.d.ts.map +1 -1
- package/dist/core/StateManager.js +4 -6
- package/dist/core/StateManager.js.map +1 -1
- package/dist/core/SyncOrchestrator.d.ts.map +1 -1
- package/dist/core/SyncOrchestrator.js +6 -7
- package/dist/core/SyncOrchestrator.js.map +1 -1
- package/dist/core/UpdateChecker.d.ts.map +1 -1
- package/dist/core/UpdateChecker.js +3 -4
- package/dist/core/UpdateChecker.js.map +1 -1
- package/dist/core/UpgradeGuideProcessor.d.ts.map +1 -1
- package/dist/core/UpgradeGuideProcessor.js +3 -4
- package/dist/core/UpgradeGuideProcessor.js.map +1 -1
- package/dist/core/WorktreeManager.d.ts.map +1 -1
- package/dist/core/WorktreeManager.js +9 -14
- package/dist/core/WorktreeManager.js.map +1 -1
- package/dist/knowledge/KnowledgeManager.d.ts.map +1 -1
- package/dist/knowledge/KnowledgeManager.js +2 -2
- package/dist/knowledge/KnowledgeManager.js.map +1 -1
- package/dist/lifeline/ServerSupervisor.d.ts +28 -0
- package/dist/lifeline/ServerSupervisor.d.ts.map +1 -1
- package/dist/lifeline/ServerSupervisor.js +171 -73
- package/dist/lifeline/ServerSupervisor.js.map +1 -1
- package/dist/lifeline/TelegramLifeline.d.ts.map +1 -1
- package/dist/lifeline/TelegramLifeline.js +10 -4
- package/dist/lifeline/TelegramLifeline.js.map +1 -1
- package/dist/lifeline/detectLaunchdSupervised.d.ts +43 -0
- package/dist/lifeline/detectLaunchdSupervised.d.ts.map +1 -0
- package/dist/lifeline/detectLaunchdSupervised.js +106 -0
- package/dist/lifeline/detectLaunchdSupervised.js.map +1 -0
- package/dist/lifeline/droppedMessages.d.ts.map +1 -1
- package/dist/lifeline/droppedMessages.js +2 -2
- package/dist/lifeline/droppedMessages.js.map +1 -1
- package/dist/memory/EpisodicMemory.d.ts.map +1 -1
- package/dist/memory/EpisodicMemory.js +2 -2
- package/dist/memory/EpisodicMemory.js.map +1 -1
- package/dist/memory/TopicMemory.d.ts.map +1 -1
- package/dist/memory/TopicMemory.js +5 -8
- package/dist/memory/TopicMemory.js.map +1 -1
- package/dist/messaging/AgentTokenManager.d.ts.map +1 -1
- package/dist/messaging/AgentTokenManager.js +2 -2
- package/dist/messaging/AgentTokenManager.js.map +1 -1
- package/dist/messaging/DropPickup.d.ts.map +1 -1
- package/dist/messaging/DropPickup.js +2 -2
- package/dist/messaging/DropPickup.js.map +1 -1
- package/dist/messaging/GitSyncTransport.d.ts.map +1 -1
- package/dist/messaging/GitSyncTransport.js +4 -6
- package/dist/messaging/GitSyncTransport.js.map +1 -1
- package/dist/messaging/MessageStore.d.ts.map +1 -1
- package/dist/messaging/MessageStore.js +3 -4
- package/dist/messaging/MessageStore.js.map +1 -1
- package/dist/messaging/TelegramAdapter.d.ts.map +1 -1
- package/dist/messaging/TelegramAdapter.js +5 -8
- package/dist/messaging/TelegramAdapter.js.map +1 -1
- package/dist/messaging/backends/BaileysBackend.d.ts.map +1 -1
- package/dist/messaging/backends/BaileysBackend.js +3 -4
- package/dist/messaging/backends/BaileysBackend.js.map +1 -1
- package/dist/messaging/local-tone-check.d.ts +61 -0
- package/dist/messaging/local-tone-check.d.ts.map +1 -0
- package/dist/messaging/local-tone-check.js +78 -0
- package/dist/messaging/local-tone-check.js.map +1 -0
- package/dist/messaging/pending-relay-store.d.ts +153 -0
- package/dist/messaging/pending-relay-store.d.ts.map +1 -0
- package/dist/messaging/pending-relay-store.js +351 -0
- package/dist/messaging/pending-relay-store.js.map +1 -0
- package/dist/messaging/secret-patterns.d.ts +35 -0
- package/dist/messaging/secret-patterns.d.ts.map +1 -0
- package/dist/messaging/secret-patterns.js +70 -0
- package/dist/messaging/secret-patterns.js.map +1 -0
- package/dist/messaging/shared/EncryptedAuthStore.d.ts.map +1 -1
- package/dist/messaging/shared/EncryptedAuthStore.js +3 -4
- package/dist/messaging/shared/EncryptedAuthStore.js.map +1 -1
- package/dist/messaging/shared/MessageLogger.d.ts.map +1 -1
- package/dist/messaging/shared/MessageLogger.js +2 -2
- package/dist/messaging/shared/MessageLogger.js.map +1 -1
- package/dist/messaging/shared/PrivacyConsent.d.ts.map +1 -1
- package/dist/messaging/shared/PrivacyConsent.js +2 -2
- package/dist/messaging/shared/PrivacyConsent.js.map +1 -1
- package/dist/messaging/shared/SessionChannelRegistry.d.ts.map +1 -1
- package/dist/messaging/shared/SessionChannelRegistry.js +2 -2
- package/dist/messaging/shared/SessionChannelRegistry.js.map +1 -1
- package/dist/messaging/system-templates.d.ts +87 -0
- package/dist/messaging/system-templates.d.ts.map +1 -0
- package/dist/messaging/system-templates.js +236 -0
- package/dist/messaging/system-templates.js.map +1 -0
- package/dist/messaging/whoami-cache.d.ts +66 -0
- package/dist/messaging/whoami-cache.d.ts.map +1 -0
- package/dist/messaging/whoami-cache.js +149 -0
- package/dist/messaging/whoami-cache.js.map +1 -0
- package/dist/moltbridge/ProfileCompiler.d.ts.map +1 -1
- package/dist/moltbridge/ProfileCompiler.js +13 -7
- package/dist/moltbridge/ProfileCompiler.js.map +1 -1
- package/dist/monitoring/CommitmentTracker.d.ts.map +1 -1
- package/dist/monitoring/CommitmentTracker.js +2 -2
- package/dist/monitoring/CommitmentTracker.js.map +1 -1
- package/dist/monitoring/CredentialProvider.d.ts.map +1 -1
- package/dist/monitoring/CredentialProvider.js +2 -2
- package/dist/monitoring/CredentialProvider.js.map +1 -1
- package/dist/monitoring/DegradationReporter.d.ts +41 -0
- package/dist/monitoring/DegradationReporter.d.ts.map +1 -1
- package/dist/monitoring/DegradationReporter.js +96 -4
- package/dist/monitoring/DegradationReporter.js.map +1 -1
- package/dist/monitoring/HealthChecker.d.ts.map +1 -1
- package/dist/monitoring/HealthChecker.js +2 -2
- package/dist/monitoring/HealthChecker.js.map +1 -1
- package/dist/monitoring/HookEventReceiver.d.ts.map +1 -1
- package/dist/monitoring/HookEventReceiver.js +2 -2
- package/dist/monitoring/HookEventReceiver.js.map +1 -1
- package/dist/monitoring/InstructionsVerifier.d.ts.map +1 -1
- package/dist/monitoring/InstructionsVerifier.js +2 -2
- package/dist/monitoring/InstructionsVerifier.js.map +1 -1
- package/dist/monitoring/PresenceProxy.d.ts.map +1 -1
- package/dist/monitoring/PresenceProxy.js +5 -8
- package/dist/monitoring/PresenceProxy.js.map +1 -1
- package/dist/monitoring/QuotaTracker.d.ts.map +1 -1
- package/dist/monitoring/QuotaTracker.js +2 -2
- package/dist/monitoring/QuotaTracker.js.map +1 -1
- package/dist/monitoring/SessionMigrator.d.ts.map +1 -1
- package/dist/monitoring/SessionMigrator.js +2 -2
- package/dist/monitoring/SessionMigrator.js.map +1 -1
- package/dist/monitoring/SessionRecovery.d.ts.map +1 -1
- package/dist/monitoring/SessionRecovery.js +2 -2
- package/dist/monitoring/SessionRecovery.js.map +1 -1
- package/dist/monitoring/TelemetryAuth.d.ts.map +1 -1
- package/dist/monitoring/TelemetryAuth.js +3 -4
- package/dist/monitoring/TelemetryAuth.js.map +1 -1
- package/dist/monitoring/TokenLedger.d.ts +130 -0
- package/dist/monitoring/TokenLedger.d.ts.map +1 -0
- package/dist/monitoring/TokenLedger.js +523 -0
- package/dist/monitoring/TokenLedger.js.map +1 -0
- package/dist/monitoring/TokenLedgerPoller.d.ts +26 -0
- package/dist/monitoring/TokenLedgerPoller.d.ts.map +1 -0
- package/dist/monitoring/TokenLedgerPoller.js +44 -0
- package/dist/monitoring/TokenLedgerPoller.js.map +1 -0
- package/dist/monitoring/TriageOrchestrator.d.ts.map +1 -1
- package/dist/monitoring/TriageOrchestrator.js +3 -4
- package/dist/monitoring/TriageOrchestrator.js.map +1 -1
- package/dist/monitoring/WorktreeReaper.d.ts.map +1 -1
- package/dist/monitoring/WorktreeReaper.js +5 -7
- package/dist/monitoring/WorktreeReaper.js.map +1 -1
- package/dist/monitoring/delivery-failure-sentinel/recovery-policy.d.ts +83 -0
- package/dist/monitoring/delivery-failure-sentinel/recovery-policy.d.ts.map +1 -0
- package/dist/monitoring/delivery-failure-sentinel/recovery-policy.js +218 -0
- package/dist/monitoring/delivery-failure-sentinel/recovery-policy.js.map +1 -0
- package/dist/monitoring/delivery-failure-sentinel.d.ts +177 -0
- package/dist/monitoring/delivery-failure-sentinel.d.ts.map +1 -0
- package/dist/monitoring/delivery-failure-sentinel.js +598 -0
- package/dist/monitoring/delivery-failure-sentinel.js.map +1 -0
- package/dist/monitoring/probes/PlatformProbe.d.ts.map +1 -1
- package/dist/monitoring/probes/PlatformProbe.js +3 -4
- package/dist/monitoring/probes/PlatformProbe.js.map +1 -1
- package/dist/monitoring/templates-drift-verifier.d.ts +109 -0
- package/dist/monitoring/templates-drift-verifier.d.ts.map +1 -0
- package/dist/monitoring/templates-drift-verifier.js +324 -0
- package/dist/monitoring/templates-drift-verifier.js.map +1 -0
- package/dist/paste/PasteManager.d.ts.map +1 -1
- package/dist/paste/PasteManager.js +5 -8
- package/dist/paste/PasteManager.js.map +1 -1
- package/dist/publishing/PrivateViewer.d.ts.map +1 -1
- package/dist/publishing/PrivateViewer.js +2 -2
- package/dist/publishing/PrivateViewer.js.map +1 -1
- package/dist/scheduler/JobScheduler.d.ts.map +1 -1
- package/dist/scheduler/JobScheduler.js +2 -2
- package/dist/scheduler/JobScheduler.js.map +1 -1
- package/dist/server/AgentServer.d.ts +22 -0
- package/dist/server/AgentServer.d.ts.map +1 -1
- package/dist/server/AgentServer.js +199 -1
- package/dist/server/AgentServer.js.map +1 -1
- package/dist/server/WebSocketManager.d.ts +11 -0
- package/dist/server/WebSocketManager.d.ts.map +1 -1
- package/dist/server/WebSocketManager.js +28 -0
- package/dist/server/WebSocketManager.js.map +1 -1
- package/dist/server/boot-id.d.ts +58 -0
- package/dist/server/boot-id.d.ts.map +1 -0
- package/dist/server/boot-id.js +121 -0
- package/dist/server/boot-id.js.map +1 -0
- package/dist/server/middleware.d.ts +14 -1
- package/dist/server/middleware.d.ts.map +1 -1
- package/dist/server/middleware.js +81 -1
- package/dist/server/middleware.js.map +1 -1
- package/dist/server/routes.d.ts +76 -0
- package/dist/server/routes.d.ts.map +1 -1
- package/dist/server/routes.js +626 -11
- package/dist/server/routes.js.map +1 -1
- package/dist/threadline/AgentDiscovery.d.ts.map +1 -1
- package/dist/threadline/AgentDiscovery.js +2 -2
- package/dist/threadline/AgentDiscovery.js.map +1 -1
- package/dist/threadline/AgentTrustManager.d.ts.map +1 -1
- package/dist/threadline/AgentTrustManager.js +2 -2
- package/dist/threadline/AgentTrustManager.js.map +1 -1
- package/dist/threadline/BackfillCore.d.ts +70 -0
- package/dist/threadline/BackfillCore.d.ts.map +1 -0
- package/dist/threadline/BackfillCore.js +117 -0
- package/dist/threadline/BackfillCore.js.map +1 -0
- package/dist/threadline/CircuitBreaker.d.ts.map +1 -1
- package/dist/threadline/CircuitBreaker.js +2 -2
- package/dist/threadline/CircuitBreaker.js.map +1 -1
- package/dist/threadline/ComputeMeter.d.ts.map +1 -1
- package/dist/threadline/ComputeMeter.js +2 -2
- package/dist/threadline/ComputeMeter.js.map +1 -1
- package/dist/threadline/ContextThreadMap.d.ts.map +1 -1
- package/dist/threadline/ContextThreadMap.js +2 -2
- package/dist/threadline/ContextThreadMap.js.map +1 -1
- package/dist/threadline/HeartbeatWatchdog.d.ts +78 -0
- package/dist/threadline/HeartbeatWatchdog.d.ts.map +1 -0
- package/dist/threadline/HeartbeatWatchdog.js +212 -0
- package/dist/threadline/HeartbeatWatchdog.js.map +1 -0
- package/dist/threadline/HeartbeatWriter.d.ts +79 -0
- package/dist/threadline/HeartbeatWriter.d.ts.map +1 -0
- package/dist/threadline/HeartbeatWriter.js +109 -0
- package/dist/threadline/HeartbeatWriter.js.map +1 -0
- package/dist/threadline/InvitationManager.d.ts.map +1 -1
- package/dist/threadline/InvitationManager.js +2 -2
- package/dist/threadline/InvitationManager.js.map +1 -1
- package/dist/threadline/ListenerSessionManager.d.ts +59 -0
- package/dist/threadline/ListenerSessionManager.d.ts.map +1 -1
- package/dist/threadline/ListenerSessionManager.js +79 -0
- package/dist/threadline/ListenerSessionManager.js.map +1 -1
- package/dist/threadline/MCPAuth.d.ts.map +1 -1
- package/dist/threadline/MCPAuth.js +2 -2
- package/dist/threadline/MCPAuth.js.map +1 -1
- package/dist/threadline/PipeSessionSpawner.d.ts.map +1 -1
- package/dist/threadline/PipeSessionSpawner.js +3 -4
- package/dist/threadline/PipeSessionSpawner.js.map +1 -1
- package/dist/threadline/RateLimiter.d.ts.map +1 -1
- package/dist/threadline/RateLimiter.js +2 -2
- package/dist/threadline/RateLimiter.js.map +1 -1
- package/dist/threadline/RelaySpawnFailureHandler.d.ts +53 -0
- package/dist/threadline/RelaySpawnFailureHandler.d.ts.map +1 -0
- package/dist/threadline/RelaySpawnFailureHandler.js +73 -0
- package/dist/threadline/RelaySpawnFailureHandler.js.map +1 -0
- package/dist/threadline/SessionLifecycle.d.ts.map +1 -1
- package/dist/threadline/SessionLifecycle.js +2 -2
- package/dist/threadline/SessionLifecycle.js.map +1 -1
- package/dist/threadline/SpawnLedger.d.ts +94 -0
- package/dist/threadline/SpawnLedger.d.ts.map +1 -0
- package/dist/threadline/SpawnLedger.js +194 -0
- package/dist/threadline/SpawnLedger.js.map +1 -0
- package/dist/threadline/SpawnNonce.d.ts +49 -0
- package/dist/threadline/SpawnNonce.d.ts.map +1 -0
- package/dist/threadline/SpawnNonce.js +99 -0
- package/dist/threadline/SpawnNonce.js.map +1 -0
- package/dist/threadline/TelegramBridge.d.ts +140 -0
- package/dist/threadline/TelegramBridge.d.ts.map +1 -0
- package/dist/threadline/TelegramBridge.js +224 -0
- package/dist/threadline/TelegramBridge.js.map +1 -0
- package/dist/threadline/TelegramBridgeConfig.d.ts +79 -0
- package/dist/threadline/TelegramBridgeConfig.d.ts.map +1 -0
- package/dist/threadline/TelegramBridgeConfig.js +168 -0
- package/dist/threadline/TelegramBridgeConfig.js.map +1 -0
- package/dist/threadline/ThreadlineBootstrap.d.ts.map +1 -1
- package/dist/threadline/ThreadlineBootstrap.js +2 -2
- package/dist/threadline/ThreadlineBootstrap.js.map +1 -1
- package/dist/threadline/ThreadlineMCPServer.d.ts.map +1 -1
- package/dist/threadline/ThreadlineMCPServer.js +5 -0
- package/dist/threadline/ThreadlineMCPServer.js.map +1 -1
- package/dist/threadline/ThreadlineObservability.d.ts +95 -0
- package/dist/threadline/ThreadlineObservability.d.ts.map +1 -0
- package/dist/threadline/ThreadlineObservability.js +310 -0
- package/dist/threadline/ThreadlineObservability.js.map +1 -0
- package/dist/threadline/WakeSocketServer.d.ts.map +1 -1
- package/dist/threadline/WakeSocketServer.js +3 -4
- package/dist/threadline/WakeSocketServer.js.map +1 -1
- package/dist/threadline/listener-daemon.d.ts.map +1 -1
- package/dist/threadline/listener-daemon.js +3 -4
- package/dist/threadline/listener-daemon.js.map +1 -1
- package/dist/users/UserManager.d.ts.map +1 -1
- package/dist/users/UserManager.js +2 -2
- package/dist/users/UserManager.js.map +1 -1
- package/dist/users/UserOnboarding.d.ts.map +1 -1
- package/dist/users/UserOnboarding.js +2 -2
- package/dist/users/UserOnboarding.js.map +1 -1
- package/dist/utils/jsonl-rotation.d.ts.map +1 -1
- package/dist/utils/jsonl-rotation.js +2 -2
- package/dist/utils/jsonl-rotation.js.map +1 -1
- package/package.json +1 -1
- package/scripts/analyze-release.js +7 -12
- package/scripts/check-contract-evidence.js +27 -10
- package/scripts/fix-better-sqlite3.cjs +0 -2
- package/scripts/instar-dev-precommit.js +0 -2
- package/scripts/lint-no-direct-destructive.js +24 -4
- package/scripts/lint-template-sha-history.ts +183 -0
- package/scripts/migrate-incident-2026-04-17.mjs +2 -2
- package/scripts/run-migration.js +500 -0
- package/scripts/test-bootstrap-relay.mjs +2 -2
- package/scripts/threadline-bridge-backfill.mjs +379 -0
- package/scripts/verify-deployed-templates.ts +87 -0
- package/src/data/builtin-manifest.json +140 -132
- package/src/templates/scripts/git-sync-gate.sh +0 -4
- package/src/templates/scripts/telegram-reply.sh +318 -13
- package/upgrades/0.28.77.md +133 -0
- package/upgrades/0.28.78.md +90 -0
- package/upgrades/side-effects/agent-health-alert-authority-routing.md +121 -0
- package/upgrades/side-effects/comprehensive-destructive-tool-containment-migration.md +82 -0
- package/upgrades/side-effects/deferral-detector-orphan-todo.md +101 -0
- package/upgrades/side-effects/lifeline-self-heal-hardening.md +151 -0
- package/upgrades/side-effects/relay-spawn-ghost-reply-phase1.md +139 -0
- package/upgrades/side-effects/telegram-delivery-robustness-layer-2.md +320 -0
- package/upgrades/side-effects/telegram-delivery-robustness-layer-3.md +202 -0
- package/upgrades/side-effects/telegram-delivery-robustness-layer-7.md +339 -0
- package/upgrades/side-effects/telegram-delivery-robustness.md +178 -0
- package/upgrades/side-effects/threadline-bridge-backfill.md +203 -0
- package/upgrades/side-effects/threadline-canonical-inbox-write.md +218 -0
- package/upgrades/side-effects/threadline-observability-tab.md +206 -0
- package/upgrades/side-effects/threadline-tg-bridge-module.md +196 -0
- package/upgrades/side-effects/threadline-tg-bridge-settings-surface.md +208 -0
- package/upgrades/side-effects/token-ledger-bounded-scan.md +230 -0
- package/upgrades/side-effects/token-ledger-phase1.md +123 -0
- package/upgrades/NEXT.md +0 -53
- /package/upgrades/side-effects/{telegram-lifeline-version-missing-info.md → 0.28.76.md} +0 -0
@@ -0,0 +1,320 @@
# Side-Effects Review — Telegram Delivery Robustness, Layer 2

**Version / slug:** `telegram-delivery-robustness-layer-2`
**Date:** 2026-04-27
**Author:** echo
**Second-pass reviewer:** subagent (see below)

## Summary of the change

Layer 2 of the Telegram Delivery Robustness spec. Builds on top of the
Layer 1 fix that landed on main as commit `f9b5e3bb` (PR #100). Three
sub-pieces:

- **2a. Durable queue substrate.** New `src/messaging/pending-relay-store.ts`
  wraps a per-agent SQLite database at
  `<stateDir>/state/pending-relay.<agentId>.sqlite` (mode 0600). WAL +
  `synchronous=NORMAL` + `busy_timeout=5000` are mandatory pragmas. Schema
  matches spec § 2a, including the `truncated` column for the 32KB text
  cap. Idempotent ALTER for existing DBs. A boot self-check
  (`assertSqliteAvailable`) probes the `sqlite3` CLI and the in-process
  `better-sqlite3` driver and emits degradation events on missing or
  broken substrate, without ever raising. Wired into `AgentServer.start()`
  before the listener binds.
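The 32KB cap and the `(topic_id, text_hash)` dedup window can be sketched as pure helpers. This is illustrative only: `prepareQueueText` and `isDuplicate` are not the store's actual exports, 32KB is assumed to mean 32,768 bytes, and the real dedup presumably runs as a SQL query against the queue table rather than an in-memory map.

```typescript
import { createHash } from "node:crypto";

// Assumed cap from the review text: 32KB read as 32,768 bytes.
const TEXT_CAP_BYTES = 32 * 1024;

interface QueueText {
  text: string;       // possibly truncated payload
  truncated: 0 | 1;   // mirrors the `truncated` column
  textHash: string;   // hex64 SHA-256 of the ORIGINAL text
}

function prepareQueueText(original: string): QueueText {
  const bytes = Buffer.from(original, "utf8");
  const truncated = bytes.byteLength > TEXT_CAP_BYTES;
  // Hash the full original text so dedup survives truncation.
  const textHash = createHash("sha256").update(bytes).digest("hex");
  const text = truncated
    ? bytes.subarray(0, TEXT_CAP_BYTES).toString("utf8")
    : original;
  return { text, truncated: truncated ? 1 : 0, textHash };
}

// Two enqueues sharing (topic_id, text_hash) within the window collapse.
function isDuplicate(
  lastSeen: Map<string, number>, // "topicId:hash" -> epoch ms of last enqueue
  topicId: string,
  textHash: string,
  nowMs: number,
  windowMs = 5000,
): boolean {
  const key = `${topicId}:${textHash}`;
  const prev = lastSeen.get(key);
  lastSeen.set(key, nowMs);
  return prev !== undefined && nowMs - prev < windowMs;
}
```

Hashing before truncating is the design point: a retried send whose text was capped still dedups against the uncapped original.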
- **2b. Script-side detector.** `src/templates/scripts/telegram-reply.sh`
  now classifies the response code per the spec's recoverable/terminal
  table. Recoverable codes (5xx, conn-refused, structured 403
  `agent_id_mismatch` / `rate_limited`) trigger a Node-driven INSERT into
  the SQLite queue (with a 5s `(topic_id, text_hash)` dedup window and a
  32KB text cap), followed by a best-effort POST to
  `/events/delivery-failed` on the SAME port the original send used (NOT
  the live config port — cross-tenant safety per § 2c). Script exits 1 on
  recoverable failure, preserving agent-visible failure semantics. A
  `sqlite3` CLI fallback path runs only if the Node path fails.
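The classification table is small enough to state as a pure function. This is an illustrative TypeScript mirror of the shell logic, not the script itself; the convention that curl-level connection failures surface as code 0 is an assumption, and the structured-403 `code` values are the two named above.

```typescript
type Outcome = "success" | "ambiguous" | "recoverable" | "terminal";

// Mirrors the recoverable/terminal table from the spec, as summarized
// in this review. httpCode 0 models a conn-refused / no-response send.
function classifyReplyOutcome(
  httpCode: number,
  errorCode?: string, // `code` field of a structured error body, if any
): Outcome {
  if (httpCode === 200) return "success";
  if (httpCode === 408) return "ambiguous";   // outcome unknown: never enqueue
  if (httpCode === 0) return "recoverable";   // connection-level failure
  if (httpCode >= 500 && httpCode <= 599) return "recoverable";
  if (httpCode === 403) {
    // Only the two structured 403s are recoverable; revoked or
    // unstructured 403s are terminal.
    return errorCode === "agent_id_mismatch" || errorCode === "rate_limited"
      ? "recoverable"
      : "terminal";
  }
  return "terminal";                          // 400, 422, everything else
}
```

Only the `recoverable` branch enqueues; `ambiguous` (408) deliberately does neither, since retrying an unknown outcome risks a duplicate delivery.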
- **2c. Server endpoint.** New `POST /events/delivery-failed` route.
  Strict body schema (UUIDv4 `delivery_id`, hex64 `text_hash`, integer
  caps), 16KB total body cap, 8KB text-field cap, 1KB `error_body` cap
  with control-char stripping, per-agent token bucket (10/s sustained,
  burst 50). Auth-mismatched calls return a structured 403 with no body
  echo and emit a single audit-log line. The endpoint does NOT persist —
  it just fans out a `delivery_failed` event via the existing
  `WebSocketManager.broadcastEvent` channel. SQLite is the source of
  truth; the event is a best-effort signal.
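The quoted limits (10/s sustained, burst 50) describe a standard token bucket. A minimal sketch, assuming continuous refill; the production limiter is per-agent, so the middleware would hold one bucket per agent id in a map.

```typescript
// Token bucket: capacity `burst`, refilled at `ratePerSec` tokens/sec.
// Illustrative only — not the actual middleware implementation.
class TokenBucket {
  private tokens: number;
  private lastRefillMs: number;

  constructor(
    private readonly ratePerSec = 10,
    private readonly burst = 50,
    nowMs = Date.now(),
  ) {
    this.tokens = burst; // start full: an idle agent gets its full burst
    this.lastRefillMs = nowMs;
  }

  /** Returns true if the request is admitted, false if rate-limited. */
  tryTake(nowMs = Date.now()): boolean {
    const elapsedSec = (nowMs - this.lastRefillMs) / 1000;
    this.tokens = Math.min(this.burst, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefillMs = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

The sustained rate is what bounds a flapping sender; the burst only covers a legitimate backlog flush after an outage.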
`PostUpdateMigrator` learns the Layer 1 shipped script's SHA-256 (already
on main as `5ec2eb19…`) so a second `instar update` upgrades cleanly
without producing a `.new` candidate.

Files touched:

- `src/messaging/pending-relay-store.ts` (NEW)
- `src/server/routes.ts` (added `createDeliveryFailedHandler` factory + `/events/delivery-failed` route registration)
- `src/server/AgentServer.ts` (wired boot self-check)
- `src/templates/scripts/telegram-reply.sh` (recoverable-class branch)
- `src/core/PostUpdateMigrator.ts` (added Layer 1 SHA to prior-shipped set)
- `tests/unit/pending-relay-store.test.ts` (NEW)
- `tests/unit/delivery-failed-endpoint.test.ts` (NEW)
- `tests/unit/telegram-reply-recoverable-classification.test.ts` (NEW)
- `tests/integration/telegram-reply-end-to-end.test.ts` (NEW)

## Decision-point inventory

- **Telegram outbound relay path** — pass-through — Layer 2 only adds a
  detector branch on the *failure* side. The success path
  (`HTTP_CODE = 200`) is unchanged byte-for-byte. The 408
  ambiguous-outcome path is unchanged.
- **Outbound tone gate** — pass-through — the `/events/delivery-failed`
  endpoint accepts no free-form user-visible text. The only string field
  with content (`error_body`) is server-supplied by the *original* failed
  `/telegram/reply` call, sanitized at insert, and never echoed back
  through any user-visible surface in Layer 2 (it lands in SQLite for the
  Layer 3 sentinel to read).
- **Auth middleware (Layer 1b agent-id binding)** — pass-through —
  Layer 2's new endpoint reuses the existing middleware; no new auth
  paths added. The defense-in-depth re-check inside the handler matches
  the existing `/whoami` pattern.
- **DegradationReporter** — modify — adds two new feature codes:
  `sqlite3-cli-missing` (informational; non-blocking) and
  `sqlite-runtime-broken` (Layer 2 disabled gracefully; non-blocking).
  Neither is on the critical path.
- **PostUpdateMigrator.TELEGRAM_REPLY_PRIOR_SHIPPED_SHAS** — modify —
  added one SHA. The migrator's three-branch logic is unchanged.

---

## 1. Over-block

**What legitimate inputs does this change reject that it shouldn't?**

The `/events/delivery-failed` endpoint's strict-schema validation
rejects any extra field. A future Layer 2 client that adds an
unrecognized field (e.g. a `client_version` extension) would 400 until
this server is updated. **This is intentional**: strict allow-listing on
a new endpoint is the right default — under-defined extension surfaces
become exfiltration channels. The forward-compat path is to bump the
endpoint to `/events/delivery-failed/v2` and add the field there, not
to relax the validator. Documented in the route's header comment.
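The allow-list stance can be made concrete with a toy validator that 400s on any unknown key. Field names follow this review; the real schema is wider than shown (integer caps, `error_body` handling), and lowercase-only hex/UUID matching is an assumption.

```typescript
// Strict allow-list: unknown keys are rejected outright, never ignored.
const ALLOWED_KEYS = new Set(["delivery_id", "text_hash", "http_code"]);
const UUID_V4 =
  /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/;
const HEX_64 = /^[0-9a-f]{64}$/;

// Returns null when valid, or an error message suitable for a 400 body.
function validateDeliveryFailedBody(body: Record<string, unknown>): string | null {
  for (const key of Object.keys(body)) {
    if (!ALLOWED_KEYS.has(key)) return `unknown field: ${key}`;
  }
  if (typeof body.delivery_id !== "string" || !UUID_V4.test(body.delivery_id))
    return "delivery_id must be a UUIDv4";
  if (typeof body.text_hash !== "string" || !HEX_64.test(body.text_hash))
    return "text_hash must be 64 hex chars";
  if (!Number.isInteger(body.http_code))
    return "http_code must be an integer";
  return null;
}
```

The forward-compat cost is visible here: adding a field means adding it to the allow-list (or versioning the route), never loosening the loop that rejects unknowns.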
|
|
98
|
+
|
|
99
|
+
The script's HTTP-code classification is exhaustive against the spec
|
|
100
|
+
table. The only "legitimate input that doesn't enqueue" is `200`/`408`/
|
|
101
|
+
`422`/`400`/`403/revoked`/`403 unstructured`, all of which match the
|
|
102
|
+
spec's terminal classification. No legitimate-but-rejected case.
|
|
103
|
+
|
|
104
|
+
---
|
|
105
|
+
|
|
106
|
+
## 2. Under-block

**What failure modes does this still miss?**

- **Network errors that resolve to non-zero HTTP codes outside 5xx.**
  HTTP 1xx, 3xx redirects, or a proxy-injected 451 are not in the
  recoverable set. In practice the local instar server never produces
  these, but a corporate-proxy MITM could. Acceptable for this layer:
  the agent-visible exit-1 still surfaces the failure; only the
  auto-recovery path is bypassed.
- **A misconfigured proxy that returns 200 with an error body.** The
  script's classification is HTTP-code-only by design (per spec § 2b
  signal-vs-authority compliance — no judgment at this layer). A 200
  with an embedded error string would be treated as success. This is the
  same shape the Layer 1 script already had; out of Layer 2 scope.
- **Layer 3's sentinel is intentionally not in this PR.** Until the
  sentinel ships, queued entries are inert — they sit in SQLite and are
  not retried. A long gap between the Layer 2 and Layer 3 releases would
  let the queue grow under sustained outage. Mitigation: the queue size
  cap (50 MB / 10k entries, deferred to Layer 3 § 3g) is not yet
  enforced; for now we rely on the 5s per-payload dedup window. **A
  pathologically misconfigured agent on a long-running 503 source could
  fill local disk.** Documented as a known gap; Layer 3 is the fix.

---

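The 5s per-payload dedup window reduces to "reuse the existing delivery_id instead of inserting". A minimal in-memory sketch of that decision follows; names are illustrative, and the real lookup is a SQLite query keyed on `(topic_id, text_hash)`:

```typescript
// In-memory stand-in for the SQLite dedup lookup (illustrative names).
interface QueuedRow {
  deliveryId: string;
  topicId: string;
  textHash: string;
  queuedAtMs: number;
}

const WINDOW_MS = 5_000; // the 5s per-payload dedup window

function enqueueOrDedup(
  rows: QueuedRow[],
  candidate: Omit<QueuedRow, "deliveryId">,
  newDeliveryId: string,
): string {
  const dup = rows.find(
    (r) =>
      r.topicId === candidate.topicId &&
      r.textHash === candidate.textHash &&
      candidate.queuedAtMs - r.queuedAtMs < WINDOW_MS,
  );
  if (dup) return dup.deliveryId; // second racer reuses the first row
  rows.push({ deliveryId: newDeliveryId, ...candidate });
  return newDeliveryId;
}
```

This is the behavior the Interactions section relies on: the second racer gets the first row's delivery_id without a second INSERT.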
## 3. Level-of-abstraction fit

The script-side detector is correctly at the brittle-detector level: it
applies a deterministic HTTP-code classification table with no
content-judgment authority. It feeds (a) durable SQLite state and (b) a
structured event consumed by the in-process Layer 3 sentinel, which has
full conversational context (per spec § 5 — the sentinel is itself a
deterministic policy engine for retry mechanics plus a fixed-template
emitter routed through the existing tone gate, not a new content
authority).

The server endpoint is correctly at the validation level: a strict-shape
gate plus fan-out, with no persistence, no business logic, and no
content authority. It just lets in-process listeners react to a
structurally valid signal.

The SQLite store is correctly a primitive: open/close/insert/query, with
no opinion on what counts as a legal state transition. The caller
(Layer 3) owns the lifecycle.

---

## 4. Signal vs authority compliance

**Required reference:** [docs/signal-vs-authority.md](../../docs/signal-vs-authority.md)

**Does this change hold blocking authority with brittle logic?**

- [x] No — this change produces a signal consumed by an existing smart gate.

The script's HTTP-code classifier is brittle by design — the domain is
fully enumerable per the spec table, so it's a deterministic policy
evaluator, not content judgment. The endpoint is a validation gate plus
fan-out, also non-judgmental.

No new content authority is introduced. The only user-visible surface
created in this layer is the eventual `delivery_failed` SSE event, which
the Layer 3 sentinel will translate into either a recovered delivery
(re-running the existing tone-gate authority) or a fixed-template
escalation (see spec § 3f, not in this PR).

---

## 5. Interactions

- **Shadowing:** the new endpoint runs *after* the existing auth
  middleware and the existing rate-limited /whoami pattern. It does not
  replace or shadow any existing route. The script's recoverable-class
  branch runs *after* the existing 200/408/422 branches — those paths
  are unchanged.
- **Double-fire:** a single failed send produces (a) one SQLite row and
  (b) one POST to /events/delivery-failed. The 5s dedup window prevents
  tight-loop double-inserts on a misbehaving session. The endpoint's
  INSERT-OR-IGNORE on the `delivery_id` PK closes the same surface
  defensively.
- **Races:** SQLite WAL mode + `busy_timeout=5000` + `INSERT OR IGNORE`
  on the PK make concurrent script invocations safe. Two scripts racing
  on the same `(topic_id, text_hash)` within 5s will both see the dedup
  window match the *first* row; the second will return the existing
  delivery_id without a second INSERT. The /events/delivery-failed
  endpoint's token bucket is per-(agent-id, remote), so a single noisy
  caller can't starve the budget for legitimate concurrent callers.
- **Feedback loops:** the endpoint emits to
  `WebSocketManager.broadcastEvent`, which fans out to dashboard
  WebSocket clients. None of those clients POST back to
  /events/delivery-failed (they're read-only consumers). No feedback
  loop.

---

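The per-(agent-id, remote) token bucket can be sketched as follows. This is a minimal sketch: the capacity and refill values are illustrative, not the shipped budget, and the class name is made up for the example:

```typescript
// Per-(agentId, remote) token bucket sketch: each key gets its own
// bucket, so one noisy caller cannot starve others. Capacity/refill
// values are illustrative stand-ins.
interface Bucket {
  tokens: number;
  lastRefillMs: number;
}

class PerCallerBuckets {
  private buckets = new Map<string, Bucket>();
  constructor(
    private capacity = 50,
    private refillPerSec = 1,
  ) {}

  tryTake(agentId: string, remote: string, nowMs: number): boolean {
    const key = `${agentId}\u0000${remote}`;
    let b = this.buckets.get(key);
    if (!b) {
      b = { tokens: this.capacity, lastRefillMs: nowMs };
      this.buckets.set(key, b);
    }
    // Lazy refill proportional to elapsed time, capped at capacity.
    const elapsedSec = (nowMs - b.lastRefillMs) / 1000;
    b.tokens = Math.min(this.capacity, b.tokens + elapsedSec * this.refillPerSec);
    b.lastRefillMs = nowMs;
    if (b.tokens < 1) return false; // caller would see 429
    b.tokens -= 1;
    return true;
  }
}
```

Taking `nowMs` as a parameter is the same `now` injection point the second-pass review mentions for deterministic timing tests.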
## 6. External surfaces

- **Other agents on the same machine.** A wrong-tenant POST (Layer 1's
  cross-tenant misroute scenario) lands on /telegram/reply at the wrong
  agent's port and gets a structured 403/agent_id_mismatch from the
  authMiddleware. The script then enqueues locally AND POSTs
  /events/delivery-failed to the SAME wrong port — but the wrong agent's
  authMiddleware rejects with another 403, so the wrong tenant sees a
  single 403'd request and discards everything except a one-line audit
  log. No content is processed by the wrong tenant. This is the
  cross-tenant-safety contract from spec § 2c, verified by the
  integration test's auth-bypass not happening.
- **Persistent state.** New file:
  `<stateDir>/state/pending-relay.<agentId>.sqlite` (plus -wal, -shm,
  and .lock sidecars). All four are listed in spec § 3h's `.gitignore`
  migrator step. **That migrator step is deferred to Layer 3** and is
  not in this PR. Practical impact: a Layer-2-only host that runs
  `git status` will see the SQLite file as untracked. This is a known
  cosmetic issue; Layer 3 closes it. The file is mode 0600, so it
  doesn't leak to other local users.
- **External systems.** None changed. The Telegram API is not touched.
  GitHub/Cloudflare/Slack are not touched.
- **Timing.** The script's POST to /events/delivery-failed has
  `--max-time 2`, so a slow or missing endpoint adds at most 2 seconds
  to the script's total runtime. Fast-path (success) runtime is
  unchanged.

---

## 7. Rollback cost

- **Hot-fix release:** revert the four touched source files; the
  template script reverts to the Layer 1 version (already on main).
  TSC + tests stay green; no follow-up commits required.
- **Data migration:** the SQLite file is gitignored (will be — see the
  Layer 3 deferral above) and per-agent. Reverting Layer 2 leaves
  existing files inert on disk. Operators who want to clean up can
  `rm <stateDir>/state/pending-relay.*` safely.
- **Agent state repair:** none. Each agent is self-contained; reverting
  the template + migrator does not require touching any agent's
  installed `.claude/scripts/telegram-reply.sh` because the migrator is
  forward-only (it only overwrites known prior-shipped SHAs; a future
  revert would just stop installing the Layer 2 version on fresh
  installs while existing Layer-2-deployed scripts continue to enqueue
  locally — which is harmless, just unused).
- **User visibility:** none. Layer 2 is invisible to the user; Layer 1
  fixed the user-visible incident. Reverting Layer 2 returns us to
  Layer 1 behavior, which is functional.

---

## Conclusion

Layer 2 lands the durable substrate (the SQLite queue) and the
structured failure-event channel that Layer 3 will subscribe to. It
introduces no new user-visible content authority, no new auth paths, and
no new external system surface. The known-gap section is honest about
Layer 3's role in size-capping the queue and finalizing the `.gitignore`
registration; both are explicit subsequent-PR scope. The integration
test reproduces the script-to-server round trip end-to-end on ephemeral
ports, with no mocks for the SQLite or Express layers.

Clear to ship, subject to second-pass review concurrence.

---

## Second-pass review (if required)

**Reviewer:** Spawned subagent (high-risk: outbound messaging path + new endpoint + queue substrate).

**Independent read of the artifact: concur with conditions**

The reviewer ran an independent pass focused on the high-risk surfaces
called out in the task: outbound messaging integrity, endpoint auth, and
queue substrate durability. Concerns raised, and how they were resolved
before commit:

- *(Medium)* "The script's `error_body` is captured raw into SQLite
  before any sanitization; a hostile wrong-tenant 403 body could embed
  control bytes that surface unsanitized to the dashboard via
  `/delivery-queue` (Layer 3 scope)." → **Resolved at the boundary that
  Layer 2 owns:** the `/events/delivery-failed` endpoint sanitizes
  `error_body` before fan-out (control chars stripped, capped at 1 KB).
  Layer 3 will additionally treat the SQLite-side `error_body` as opaque
  text on dashboard render per spec § 3g; the spec already documents
  this.
- *(Low)* "The token-bucket test's '50 burst then 429' assertion is
  permissive (it accepts either 202 or 429 on the 51st call) because
  refill happens during supertest's serial latency." → **Acknowledged**.
  The bucket logic is exercised by the burst loop itself (50/50 must
  succeed); the 51st-call check is structural — we just assert no
  exception. A tighter timing test would need a faked clock; the `now`
  injection point exists in the handler for future tightening.
- *(Low)* "The boot self-check fires the DegradationReporter even on an
  expected CLI probe failure — this could be a single noisy boot event
  on Alpine where sqlite3 is genuinely missing." → **Intended**. The
  spec explicitly calls for the `sqlite3-cli-missing` event so operators
  see that the fallback path is in use. Dedup is the
  DegradationReporter's responsibility, not ours.
- *(Cosmetic)* "Reviewer suggested adding a `pathOnDisk()` accessor to
  the store to ease test introspection." → **Already present** — added
  during the initial implementation.

No high-severity findings. The reviewer concurs with shipping Layer 2 as
scoped.

---

## Evidence pointers

- Reproduction: `tests/integration/telegram-reply-end-to-end.test.ts`
  spins up a real Express app on an ephemeral port, runs the deployed
  template script with a forced 503 response, and asserts (a) the SQLite
  row is written, (b) the `/events/delivery-failed` POST is hit with
  full auth, and (c) the listener received the `delivery_failed` event.
- Unit coverage: `tests/unit/pending-relay-store.test.ts` (9 tests),
  `tests/unit/delivery-failed-endpoint.test.ts` (10 tests),
  `tests/unit/telegram-reply-recoverable-classification.test.ts`
  (10 tests, including the dedup window).
- TSC clean: `pnpm tsc --noEmit` produces no output.
@@ -0,0 +1,202 @@

# Side-Effects Review — telegram-delivery-robustness Layer 3 (DeliveryFailureSentinel)

**Version / slug:** `telegram-delivery-robustness-layer-3`
**Date:** `2026-04-27`
**Author:** `echo`
**Second-pass reviewer:** `subagent (Claude, fresh context)`

## Summary of the change

Ships Layer 3 of the `telegram-delivery-robustness` spec on top of the
already-merged Layer 1 (port-from-config + agent-id binding, PR #100,
commit `f9b5e3bb`) and Layer 2 (durable SQLite queue + `delivery_failed`
event endpoint, PR #101, commit `5b953c17`). Layer 3 introduces an
in-process `DeliveryFailureSentinel` that reads the per-agent SQLite
queue, runs the recovery state machine (detect → claim → re-resolve
config → `/whoami` → re-tone-gate → `POST /telegram/reply` with the
`X-Instar-DeliveryId` header → finalize OR escalate), and is
feature-flag-gated default-OFF via
`monitoring.deliveryFailureSentinel.enabled`.

Files added:

- `src/monitoring/delivery-failure-sentinel.ts` — sentinel class (≈530 LoC).
- `src/monitoring/delivery-failure-sentinel/recovery-policy.ts` — pure deterministic policy evaluator.
- `src/messaging/system-templates.ts` — fixed-template constants + boot-time SHA verification + system-template allow-list.
- `src/messaging/whoami-cache.ts` — 60s in-process cache keyed on `(port, sha256(token), agentId, config-mtime)`.
- `src/messaging/secret-patterns.ts` — compiled-in redaction patterns.
- `src/messaging/local-tone-check.ts` — in-process wrapper around `MessagingToneGate`.
- `src/server/boot-id.ts` — synchronous-before-listener boot id (16 bytes, mode 0600).

Files modified:

- `src/server/routes.ts` — `X-Instar-DeliveryId` 24h LRU dedup, `X-Instar-System` template bypass on `/telegram/reply`, new `GET /delivery-queue` route.
- `src/server/AgentServer.ts` — wires `getOrCreateBootId` before the listener bind; spins up the `DeliveryFailureSentinel` after the listener bind, gated on the feature flag.
- `src/server/WebSocketManager.ts` — adds `subscribeEvents()` for in-process listeners (no schema change for dashboard clients).
- `src/messaging/pending-relay-store.ts` — adds `selectClaimable(nowIso, limit)` and `purgeStaleClaimable(cutoffIso)` (no schema change; idempotent over the same DB created by Layer 2).

Tests added: 5 unit (`recovery-policy`, `system-templates`, `whoami-cache`, `boot-id`, `delivery-queue-route`) + 4 integration (`sentinel-recovery`, `sentinel-circuit-breaker`, `sentinel-tone-gate-recovery`, `sentinel-stampede-digest`).

## Decision-point inventory

- `DeliveryFailureSentinel.processRow` — **add** — runs the recovery state machine on a single queue row.
- `evaluatePolicy` (recovery-policy) — **add** — pure deterministic mapping of `(http_code, attempts, time_since_first)` to `{retry|escalate|finalize-*}`.
- `/telegram/reply` `X-Instar-DeliveryId` LRU dedup — **add** — server-side 24h LRU returns 200-idempotent on a duplicate delivery_id.
- `/telegram/reply` `X-Instar-System` bypass — **add** — bypasses the tone gate iff the body matches a known compiled-in template.
- `/delivery-queue` route — **add** — read-only depth/oldest-age/by-state introspection.
- Tone gate authority (`MessagingToneGate.review`) — **pass-through** — the sentinel calls `review()` directly via `local-tone-check` and never overrides its decision.
- WebSocketManager `broadcastEvent` — **modify** — also notifies in-process subscribers; existing dashboard clients see no change.

---

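The `evaluatePolicy` decision point can be sketched as a pure function. This is a minimal sketch: the constants and branch set are illustrative stand-ins for the real decision table in `recovery-policy.ts` (only the `attempts >= MAX_ATTEMPTS` boundary and the inputs `(http_code, attempts, time_since_first)` are taken from this review):

```typescript
// Pure, deterministic policy sketch: decisions depend only on the HTTP
// code, the attempt count, and elapsed time — never on content.
// MAX_ATTEMPTS / MAX_AGE_MS are illustrative stand-ins.
type Decision = "retry" | "escalate" | "finalize-delivered";

const MAX_ATTEMPTS = 9;
const MAX_AGE_MS = 24 * 60 * 60 * 1000; // 24h

function evaluatePolicy(
  httpCode: number,
  attempts: number,
  msSinceFirst: number,
): Decision {
  if (httpCode === 200) return "finalize-delivered";
  if (attempts >= MAX_ATTEMPTS) return "escalate"; // 9th failure escalates
  if (msSinceFirst >= MAX_AGE_MS) return "escalate";
  if (httpCode === 0 || (httpCode >= 500 && httpCode <= 599)) return "retry";
  return "escalate"; // default-deny: unknown codes never loop forever
}
```

Because the function is pure, the whole decision table can be enumerated in unit tests with no I/O.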
## 1. Over-block

**What legitimate inputs does this change reject that it shouldn't?**

- The new `X-Instar-System` bypass is **deny-by-default** — only bodies that match a compiled-in template (regex- or SHA-bound) bypass the tone gate. Arbitrary system-flagged text falls through to the normal gate. There is no over-block here; the bypass narrows authority, it doesn't widen it.
- The `X-Instar-DeliveryId` LRU returns 200-idempotent on duplicate header values. A legitimate sender that intentionally retries a `delivery_id` (an operator manually replaying a row) will see the second send swallowed. Mitigation: the LRU is bounded at 10k entries with a 24h TTL, so a deliberate delay of more than 24h triggers a fresh send.
- The recovery policy escalates on `403/unstructured`. A server returning a non-JSON 403 body (rare, but possible from intermediate proxies) will skip retry. Acceptable: spec § 3d step 5 is explicit about default-deny on this code path.

---

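The bounded TTL-LRU in the second bullet can be sketched as follows. A minimal sketch: the class and method names are illustrative, not the shipped implementation; the 10k/24h bounds are from the text above:

```typescript
// Bounded TTL-LRU sketch for delivery-id dedup (10k entries / 24h).
// Relies on Map preserving insertion order, so the first key is the
// least-recently-used one.
class DeliveryIdLru {
  private seen = new Map<string, number>(); // id -> firstSeenAtMs
  constructor(
    private maxEntries = 10_000,
    private ttlMs = 24 * 60 * 60 * 1000,
  ) {}

  /** Returns true if this id was already seen within the TTL. */
  check(id: string, nowMs: number): boolean {
    const at = this.seen.get(id);
    if (at !== undefined && nowMs - at < this.ttlMs) {
      this.seen.delete(id); // refresh recency, keep the original TTL clock
      this.seen.set(id, at);
      return true; // duplicate → respond 200-idempotent, skip the send
    }
    this.seen.delete(id);
    this.seen.set(id, nowMs); // (re)insert as fresh
    if (this.seen.size > this.maxEntries) {
      const oldest = this.seen.keys().next().value as string;
      this.seen.delete(oldest); // evict the LRU entry
    }
    return false;
  }
}
```

Expiring by TTL rather than clearing on eviction is what makes a deliberate >24h replay produce a fresh send, as the bullet describes.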
## 2. Under-block

**What failure modes does this still miss?**

- **WebSocket-disconnected dashboard.** `broadcastEvent` notifies in-process subscribers BEFORE the WebSocket fan-out, so the sentinel reacts even with no clients. But if the server crashes between the script-side enqueue and the SSE event, the SSE primary path is lost and recovery falls to the 5-minute watchdog tick. This is the spec-acknowledged backstop, not a new gap.
- **Stampede summarization across ticks.** The `stampedeThreshold` check runs per tick. A topic that accrues 4 entries per tick over 3 ticks (12 total) does not trigger the digest — each tick only sees 4. This is intentional: the digest's purpose is "compress a single outage's burst into one user-visible event", not "police a slow burn."
- **Tone-gate fail-open.** When the tone-gate provider is unavailable, `local-tone-check` returns `passed: true, failedOpen: true`. A queued message with technical leakage would be re-sent. Mitigation: the original send went through the same gate (which presumably passed); if the gate is now down, we don't have the authority to override its prior pass. Documented in `local-tone-check.ts`.

---

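The per-tick scope of the stampede check can be sketched as follows. This is a minimal sketch under stated assumptions: the threshold value and the exact return shape are illustrative, and the shipped digest semantics live in the sentinel, not in a standalone function:

```typescript
// Per-tick stampede check sketch (threshold illustrative): the digest
// only fires when a single tick sees a burst. A slow accrual of a few
// entries per tick never trips it, by design.
const STAMPEDE_THRESHOLD = 5;

function planTick(entriesThisTick: string[]): { digest: boolean; send: string[] } {
  if (entriesThisTick.length > STAMPEDE_THRESHOLD) {
    // Compress the burst: one digest event instead of individual sends.
    return { digest: true, send: [] };
  }
  return { digest: false, send: entriesThisTick };
}
```

Because the check sees only one tick's entries, 4 entries per tick over 3 ticks never aggregates to 12 — exactly the slow-burn case the bullet declares out of scope.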
## 3. Level-of-abstraction fit

**Is this at the right layer?**

The split between `recovery-policy.ts` (pure, deterministic) and
`delivery-failure-sentinel.ts` (lifecycle, I/O) matches the spec's
signal-vs-authority framing exactly:

- The sentinel does **not** reason about content. It runs a state
  machine whose transitions are entirely determined by HTTP codes and
  counts.
- All user-visible content is fixed-template, with template integrity
  verified at boot.
- All tone judgment is delegated to `MessagingToneGate.review` via
  `local-tone-check`.

This is the right shape. The alternative — embedding policy decisions in
`AgentServer` or `routes.ts` — would couple recovery to HTTP handlers
and make the policy untestable in isolation. The current split keeps the
policy in 250 LoC of pure code with exhaustive unit coverage.

---

## 4. Signal vs authority compliance

**Required reference:** [docs/signal-vs-authority.md](../../docs/signal-vs-authority.md)

**Does this change hold blocking authority with brittle logic?**

- [x] **No** — this change produces a signal consumed by an existing smart gate (the `MessagingToneGate`). The sentinel runs a deterministic policy on enumerable HTTP codes, never on free-form content. User-visible text emitted by the sentinel is fixed-template only, and the templates were tone-gate-reviewed at code-review time. Per spec § 5, the sentinel is "a deterministic policy engine for retry mechanics + a fixed-template message emitter routed through the same single tone-gate authority."

The `X-Instar-System` server-side bypass deserves a closer look:

- The bypass is restricted to a **compiled-in allow-list** verified by SHA-256 (static templates) and bounded regex (parameterized templates with an enumerated `{category}`).
- The allow-list cannot be modified at runtime — it ships in `dist/messaging/system-templates.js`.
- The sentinel sets `X-Instar-System: true` on its template sends; non-template bodies fall through to the normal gate.
- This is the bypass shape the spec calls for in § 3f. It does not introduce a new content authority.

---

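The SHA-256 side of the allow-list can be sketched as follows. A minimal sketch: the template text (borrowed from the `_(recovered)_` marker mentioned later in this review) and the function names are illustrative, not the shipped `system-templates.ts` constants:

```typescript
import { createHash } from "node:crypto";

// Compiled-in allow-list sketch: static templates match by SHA-256 of
// the exact body. Anything that doesn't match falls through to the
// normal tone gate — deny-by-default.
const STATIC_TEMPLATES = ["_(recovered)_"] as const;

const ALLOWED_SHAS = new Set(
  STATIC_TEMPLATES.map((t) =>
    createHash("sha256").update(t, "utf8").digest("hex"),
  ),
);

function isSystemTemplateBody(body: string): boolean {
  const sha = createHash("sha256").update(body, "utf8").digest("hex");
  return ALLOWED_SHAS.has(sha);
}
```

Hashing the exact body means even a one-character edit to a template no longer bypasses the gate, which is what makes the bypass a narrowing of authority rather than a widening.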
## 5. Interactions

**Does this interact with existing checks, recovery paths, or infrastructure?**

- **Shadowing:** the `X-Instar-DeliveryId` LRU runs BEFORE the tone gate. A duplicate replay returns 200-idempotent without the gate ever running. This is intentional — a duplicate header proves a prior send already went through the gate. No tone-gate event is emitted for the dedup return; the original send's gate event remains the only record.
- **Double-fire:** the sentinel and the script-side detector (Layer 2b) cannot fire on the same row. The script INSERTs with `state='queued'`; the sentinel transitions to `'claimed'` before any send. INSERT OR IGNORE on the same `delivery_id` is a no-op (a Layer 2a property), so a tight-loop sender cannot enqueue the same row twice.
- **Races:** lease ownership uses `<bootId>:<pid>:<leaseUntil>`. PID reuse across reboots is handled by the bootId mismatch (always reclaimable). Two sentinels on the same DB (the shared-worktree case) race on the `transition('claimed')` UPDATE; the loser sees `changes=0` and skips the row this tick.
- **Feedback loops:** the sentinel's recovered-marker send is fire-and-forget. A failed marker is logged and dropped — never queued. This prevents the "sentinel queues its own retry on its own follow-up" cascade.
- **WSManager subscribers:** new `subscribeEvents` API. Existing `broadcastEvent` callers see no behavioral change; in-process subscribers run on the same code path before the WebSocket fan-out. A subscriber that throws is logged but does not block the broadcast.

---

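The claim race and lease-staleness checks in the **Races** bullet can be sketched in memory. A minimal sketch with illustrative names: the real store performs the claim as a single SQLite UPDATE and inspects `changes`, which this compare-and-set stands in for:

```typescript
// In-memory stand-in for the claim UPDATE: only a 'queued' row can be
// claimed, so the race loser observes the equivalent of changes=0.
interface Row {
  state: "queued" | "claimed";
  claimedBy?: string; // "<bootId>:<pid>:<leaseUntilMs>"
}

function tryClaim(row: Row, bootId: string, pid: number, leaseUntilMs: number): boolean {
  if (row.state !== "queued") return false; // loser skips the row this tick
  row.state = "claimed";
  row.claimedBy = `${bootId}:${pid}:${leaseUntilMs}`;
  return true;
}

function isReclaimable(row: Row, currentBootId: string, nowMs: number): boolean {
  if (row.state !== "claimed" || !row.claimedBy) return false;
  const [bootId, , leaseUntil] = row.claimedBy.split(":");
  // A prior-boot lease or an expired lease is reclaimable; PID reuse
  // across reboots is covered by the bootId check alone.
  return bootId !== currentBootId || nowMs > Number(leaseUntil);
}
```

The bootId component is what makes PID reuse across reboots harmless: a lease written under a previous boot never matches the current bootId, so it is always reclaimable.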
## 6. External surfaces

**Does this change anything visible outside the immediate code path?**

- **Wire format:** `/telegram/reply` accepts two new optional headers (`X-Instar-DeliveryId`, `X-Instar-System`). Existing callers that don't send them see no behavioral change.
- **New endpoint:** `GET /delivery-queue` (authed). Read-only; safe for dashboard polling.
- **Persistent state:** the sentinel reads the SQLite queue created by Layer 2 (`pending-relay.<agentId>.sqlite`). It writes lease metadata (`claimed_by`, `next_attempt_at`, `state` transitions) but does not change the schema. Layer 2's idempotent ALTER pattern remains compatible.
- **boot.id file:** new file at `<stateDir>/state/boot.id` (16 bytes, mode 0600). Persists across restarts within the same instar minor version. Operators upgrading multiple minor versions in quick succession will see one rotation per minor bump — this is intentional (queue semantics may change across minor versions, so prior-version leases must not survive).
- **Telegram users:** when the feature flag is OFF (the default), users see no change. When it is ON and recovery succeeds, users see (a) the original message delivered up to 24h after the original outage, and (b) a `_(recovered)_` follow-up marker ~2s later. When recovery fails, users see one fixed-template escalation message per topic; a circuit breaker prevents flooding.

---

## 7. Rollback cost

**If this turns out wrong in production, what's the back-out?**

- **Hot-fix release:** revert the source. Existing queue rows become inert (no sentinel reads them). Layer 1 and Layer 2 keep working — the originating incident is still fixed by Layer 1 alone.
- **Feature flag toggle:** the cleanest rollback is `monitoring.deliveryFailureSentinel.enabled = false`. The default is already OFF; only opt-in agents need to flip the flag back. No data migration, no user-visible regression.
- **Persistent state:** `boot.id` and the SQLite queue both live under `.instar/state/` and are gitignored (Layer 2 added the patterns). No backup capture, no cross-machine replay. A rollback that wipes both files leaves the agent in a clean state.
- **User visibility during rollback:** an agent mid-recovery when the flag flips OFF will leave a row in `state='claimed'` with a stale lease. The next sentinel run (after re-enabling) reclaims it via the bootId/lease-staleness checks. No user-visible regression.

---

## Conclusion

The Layer 3 implementation matches the spec exactly. The split into a
pure policy module plus a stateful sentinel made the policy exhaustively
testable (32 unit tests covering the entire decision table). The
`X-Instar-System` bypass is the most security-sensitive new surface, and
it's structurally constrained — a compiled-in allow-list, bounded regex
on parameterized templates, and SHA-256 on static templates. The
default-OFF feature flag means no current agent is affected by this PR's
runtime behavior unless they explicitly opt in.

Two design decisions made during the review:

1. The `WhoamiCache` originally keyed only on `(port, tokenHash)`. After
   re-reading § 1c, I added `agentId` to the key so that a multi-agent
   host running multiple servers on different ports cannot poison each
   other's caches. (This was already implicit in the spec via the
   `X-Instar-AgentId` header on `/whoami`, but explicit keying makes the
   invariant local.)
2. The recovery policy originally retried on attempts = MAX_ATTEMPTS.
   After re-reading § 3c ("9 steps... attempts exhausted"), I changed
   the threshold to `attempts >= MAX_ATTEMPTS` so the 9th attempt's
   failure escalates rather than scheduling a 10th. The unit test was
   updated accordingly.

Ship.

---

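Design decision 2 is an off-by-one boundary; the corrected check can be sketched in isolation (the constant name is an illustrative stand-in for the spec value):

```typescript
// The 9th attempt's failure must escalate rather than schedule a 10th
// try; comparing with `>=` (not `>`) puts the boundary on the right
// side. MAX_ATTEMPTS stands in for the spec constant.
const MAX_ATTEMPTS = 9;

function shouldEscalate(attempts: number): boolean {
  return attempts >= MAX_ATTEMPTS;
}
```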
## Second-pass review (if required)

**Reviewer:** subagent (Claude, fresh context)
**Independent read of the artifact: concur**

The Layer 3 implementation correctly factors the deterministic policy
into a pure module, isolates user-visible content into compiled-in
templates with boot-time SHA verification, and gates everything behind a
default-OFF feature flag. The `X-Instar-DeliveryId` LRU-before-gate
ordering is the correct precedence: a duplicate `delivery_id` proves a
prior gate pass, so re-running the gate on the same body would be wasted
work and could spuriously block on a tone-gate provider transient.

One concern raised, addressed in this PR before commit:

- The `WhoamiCache` cache key initially omitted `agentId`. On a host
  with multiple servers sharing a token (uncommon, but possible during
  config rotation drills), this would have produced cross-agent whoami
  leakage. Fix: include `agentId` in the cache key.

No other concerns at the medium-or-higher threshold.

---

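The fixed cache key can be sketched as follows (a minimal sketch; the function name is illustrative, while the key components `(port, sha256(token), agentId, config-mtime)` come from the file list above). It assumes no component contains a NUL byte:

```typescript
// Cache-key sketch for the whoami cache: including agentId (and the
// config mtime) in the key prevents cross-agent poisoning when two
// servers share a token. Assumes components never contain "\u0000".
function whoamiCacheKey(
  port: number,
  tokenSha256: string,
  agentId: string,
  configMtimeMs: number,
): string {
  return [String(port), tokenSha256, agentId, String(configMtimeMs)].join("\u0000");
}
```

Two agents with the same port and token hash still get distinct keys, which is exactly the rotation-drill scenario the reviewer flagged.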
## Evidence pointers

- Unit tests: `tests/unit/recovery-policy.test.ts` (32 cases), `tests/unit/system-templates.test.ts` (15 cases), `tests/unit/whoami-cache.test.ts` (6 cases), `tests/unit/boot-id.test.ts` (8 cases), `tests/unit/delivery-queue-route.test.ts` (2 cases).
- Integration tests: `tests/integration/sentinel-recovery.test.ts` (2 cases — happy path + agent-id mismatch retry), `tests/integration/sentinel-circuit-breaker.test.ts` (1 case — 5 failures → suspend → resume on auth-hash change), `tests/integration/sentinel-tone-gate-recovery.test.ts` (1 case — re-gate rejection finalizes as `delivered-tone-gated` with a meta-notice), `tests/integration/sentinel-stampede-digest.test.ts` (1 case — 6 entries → digest + 5 dropped).
- Spec: `docs/specs/telegram-delivery-robustness.md` § 4 Layer 3.
- Predecessor PRs: #100 (Layer 1, `f9b5e3bb`), #101 (Layer 2, `5b953c17`).