superlocalmemory 3.0.7 → 3.0.10

This diff shows the publicly released contents of the two package versions as they appear in their public registry, and is provided for informational purposes only.
package/README.md CHANGED
@@ -38,8 +38,8 @@ SuperLocalMemory gives AI assistants persistent, structured memory that survives
 
  | Metric | Score | Context |
  |:-------|:-----:|:--------|
+ | LoCoMo (Mode C, full power) | **87.7%** | On conv-30, 81 scored questions |
  | LoCoMo (Mode A, zero-LLM) | **62.3%** | Highest zero-LLM score. No cloud dependency. |
- | LoCoMo (Mode C, full) | **~78%** | Competitive with funded systems ($10M+) |
  | Math layer improvement | **+12.7pp** | Average gain from mathematical foundations |
  | Multi-hop improvement | **+12pp** | 50% vs 38% (math on vs off) |
 
@@ -95,6 +95,16 @@ Query ──► Strategy Classifier ──► 4 Parallel Channels:
 
  > All Python dependencies are installed automatically during `npm install`. You don't need to run pip manually. If any dependency fails, the installer shows clear instructions.
 
+ ### What Gets Installed Automatically
+
+ | Component | Size | When |
+ |:----------|:-----|:-----|
+ | Core math libraries (numpy, scipy, networkx) | ~50MB | During `npm install` |
+ | Search engine (sentence-transformers, einops, torch) | ~200MB | During `npm install` |
+ | Embedding model (nomic-ai/nomic-embed-text-v1.5) | ~500MB | On first use OR `slm warmup` |
+
+ **If any dependency fails during install**, the installer prints the exact `pip install` command to fix it. BM25 keyword search works even without embeddings — you're never fully blocked.
+
  ---
 
  ## Quick Start
@@ -107,15 +117,18 @@ npm install -g superlocalmemory
 
  This single command:
  - Installs the V3 engine and CLI
- - Auto-installs all Python dependencies (numpy, scipy, networkx, sentence-transformers, etc.)
+ - Auto-installs all Python dependencies (numpy, scipy, networkx, sentence-transformers, einops, torch, etc.)
  - Creates the data directory at `~/.superlocalmemory/`
  - Detects and guides V2 migration if applicable
 
- Then configure:
+ Then configure and pre-download the embedding model:
  ```bash
- slm setup    # Choose mode, configure provider
+ slm setup    # Choose mode, configure provider
+ slm warmup   # Pre-download embedding model (~500MB, optional)
  ```
 
+ > **First time?** If you skip `slm warmup`, the model downloads automatically on first `slm remember` or `slm recall`. Either way works.
+
  ### Install via pip
 
  ```bash
@@ -159,6 +172,27 @@ Add to your IDE's MCP config:
 
  24 MCP tools available: `remember`, `recall`, `search`, `fetch`, `list_recent`, `get_status`, `build_graph`, `switch_profile`, `health`, `consistency_check`, `recall_trace`, and more.
 
+ ### Web Dashboard (17 tabs)
+
+ ```bash
+ slm dashboard    # Opens at http://localhost:8765
+ ```
+
+ The V3 dashboard provides real-time visibility into your memory system:
+
+ - **Dashboard** — Mode switcher, health score, quick store/recall
+ - **Recall Lab** — Search with per-channel score breakdown (Semantic, BM25, Entity, Temporal)
+ - **Knowledge Graph** — Interactive entity relationship visualization
+ - **Memories** — Browse, search, and manage stored memories
+ - **Trust Dashboard** — Bayesian trust scores per agent with Beta distribution visualization
+ - **Math Health** — Fisher-Rao confidence, Sheaf consistency, Langevin lifecycle state
+ - **Compliance** — GDPR export/erasure, EU AI Act status, audit trail
+ - **Learning** — Adaptive ranking progress, behavioral patterns, outcome tracking
+ - **IDE Connections** — Connected AI tools status and configuration
+ - **Settings** — Mode, provider, auto-capture/recall configuration
+
+ > The dashboard runs locally at `http://localhost:8765`. No data leaves your machine.
+
  ---
 
  ## V3 Engine Features
@@ -215,7 +249,7 @@ Evaluated on the [LoCoMo benchmark](https://arxiv.org/abs/2402.09714) (Long Conv
  | EverMemOS | 92.3% | Yes | No | No |
  | MemMachine | 91.7% | Yes | No | No |
  | Hindsight | 89.6% | Yes | No | No |
- | **SLM V3 Mode C** | **~78%** | Optional | **Yes** | Partial |
+ | **SLM V3 Mode C** | **87.7%** | Optional | **Yes** | Partial |
  | **SLM V3 Mode A** | **62.3%** | **No** | **Yes** | **Yes** |
  | Mem0 ($24M) | 34.2% F1 | Yes | Partial | No |
 
Binary file
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "superlocalmemory",
-   "version": "3.0.7",
+   "version": "3.0.10",
    "description": "Information-geometric agent memory with mathematical guarantees. 4-channel retrieval, Fisher-Rao similarity, zero-LLM mode, EU AI Act compliant. Works with Claude, Cursor, Windsurf, and 17+ AI tools.",
    "keywords": [
      "ai-memory",
package/pyproject.toml CHANGED
@@ -1,6 +1,6 @@
  [project]
  name = "superlocalmemory"
- version = "3.0.0"
+ version = "3.0.10"
  description = "Information-geometric agent memory with mathematical guarantees"
  readme = "README.md"
  license = {text = "MIT"}
@@ -22,11 +22,13 @@ dependencies = [
      "python-dateutil>=2.9.0.post0",
      "rank-bm25>=0.2.2",
      "vadersentiment>=3.3.2",
+     "einops>=0.8.2",
  ]
 
  [project.optional-dependencies]
  search = [
      "sentence-transformers>=2.5.0,<4.0.0",
+     "einops>=0.7.0,<1.0.0",
      "torch>=2.2.0",
      "scikit-learn>=1.3.0,<2.0.0",
      "geoopt>=0.5.0",
@@ -112,15 +112,19 @@ if (pipInstall(coreDeps, 'core')) {
  }
 
  // Search dependencies (IMPORTANT — enables semantic search, 4-channel retrieval)
- const searchDeps = ['sentence-transformers>=2.5.0', 'geoopt>=0.5.0'];
+ const searchDeps = ['sentence-transformers>=2.5.0', 'einops>=0.7.0', 'geoopt>=0.5.0'];
 
  console.log('\nInstalling semantic search engine (downloads ~500MB on first use)...');
  if (pipInstall(searchDeps, 'search')) {
-   console.log('✓ Semantic search engine installed (sentence-transformers + Fisher-Rao)');
+   console.log('✓ Semantic search engine installed (sentence-transformers + einops + Fisher-Rao)');
+   console.log('');
+   console.log(' Note: The embedding model (nomic-ai/nomic-embed-text-v1.5, ~500MB)');
+   console.log(' will download automatically on first use (slm remember / slm recall).');
+   console.log(' To pre-download now, run: slm warmup');
  } else {
    console.log('⚠ Semantic search installation failed (BM25 keyword search still works).');
    console.log(' For full 4-channel retrieval, run:');
-   console.log(' pip install sentence-transformers geoopt');
+   console.log(' pip install sentence-transformers einops geoopt');
  }
 
  // --- Step 4: Detect V2 installation ---
@@ -29,6 +29,8 @@ def dispatch(args: Namespace) -> None:
          "status": cmd_status,
          "health": cmd_health,
          "trace": cmd_trace,
+         "warmup": cmd_warmup,
+         "dashboard": cmd_dashboard,
          "profile": cmd_profile,
      }
      handler = handlers.get(args.command)
@@ -216,6 +218,78 @@ def cmd_trace(args: Namespace) -> None:
          print(f" {ch}: {sc:.3f}")
 
 
+ def cmd_warmup(_args: Namespace) -> None:
+     """Pre-download the embedding model so first use is instant."""
+     print("Downloading embedding model (nomic-ai/nomic-embed-text-v1.5)...")
+     print("This is ~500MB and only needed once.\n")
+
+     try:
+         from superlocalmemory.core.config import EmbeddingConfig
+         from superlocalmemory.core.embeddings import EmbeddingService
+
+         config = EmbeddingConfig()
+         svc = EmbeddingService(config)
+
+         # Force model load (triggers download)
+         if svc.is_available:
+             # Verify it works
+             emb = svc.embed("warmup test")
+             if emb and len(emb) == config.dimension:
+                 print(f"\nModel ready: {config.model_name} ({config.dimension}-dim)")
+                 print("Semantic search is fully operational.")
+             else:
+                 print("\nModel loaded but embedding verification failed.")
+                 print("Run: pip install sentence-transformers einops")
+         else:
+             print("\nModel could not load.")
+             print("Install dependencies: pip install sentence-transformers einops torch")
+     except ImportError as exc:
+         print(f"\nMissing dependency: {exc}")
+         print("Install with: pip install sentence-transformers einops torch")
+     except Exception as exc:
+         print(f"\nWarmup failed: {exc}")
+         print("Check your internet connection and try again.")
+
+
+ def cmd_dashboard(args: Namespace) -> None:
+     """Launch the web dashboard."""
+     try:
+         import uvicorn
+     except ImportError:
+         print("Dashboard requires: pip install 'fastapi[all]' uvicorn")
+         sys.exit(1)
+
+     import socket
+
+     port = getattr(args, "port", 8765)
+
+     def _find_port(preferred: int) -> int:
+         for p in [preferred] + list(range(preferred + 1, preferred + 20)):
+             try:
+                 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+                     s.bind(("127.0.0.1", p))
+                 return p
+             except OSError:
+                 continue
+         return preferred
+
+     ui_port = _find_port(port)
+     if ui_port != port:
+         print(f" Port {port} in use — using {ui_port} instead")
+
+     print("=" * 60)
+     print(" SuperLocalMemory V3 — Web Dashboard")
+     print("=" * 60)
+     print(f" Dashboard: http://localhost:{ui_port}")
+     print(f" API Docs: http://localhost:{ui_port}/api/docs")
+     print(" Press Ctrl+C to stop\n")
+
+     from superlocalmemory.server.ui import create_app
+
+     app = create_app()
+     uvicorn.run(app, host="127.0.0.1", port=ui_port, log_level="info")
+
+
  def cmd_profile(args: Namespace) -> None:
      """Profile management (list, switch, create)."""
      from superlocalmemory.core.config import SLMConfig
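The `_find_port` helper inside `cmd_dashboard` above probes a small window of localhost ports and falls back to the preferred one if none bind. A standalone sketch of that logic (stdlib only; `find_port` and the 20-port window mirror the diff, but this is an illustration rather than the package's code):

```python
import socket


def find_port(preferred, attempts=20):
    """Return the first bindable localhost port in [preferred, preferred + attempts)."""
    for p in range(preferred, preferred + attempts):
        try:
            # Bind briefly to test availability; the socket closes on exit,
            # freeing the port for the real server.
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.bind(("127.0.0.1", p))
            return p
        except OSError:
            continue
    # Nothing free in the window: fall back to the preferred port and let
    # the server surface the bind error itself.
    return preferred


print(find_port(8765))
```

The fallback means the command never aborts during port selection; worst case, uvicorn itself reports the bind failure.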
@@ -67,6 +67,15 @@ def main() -> None:
      trace_p = sub.add_parser("trace", help="Recall with channel breakdown")
      trace_p.add_argument("query", help="Search query")
 
+     # Warmup (pre-download model)
+     sub.add_parser("warmup", help="Pre-download embedding model (~500MB)")
+
+     # Dashboard
+     dashboard_p = sub.add_parser("dashboard", help="Open web dashboard")
+     dashboard_p.add_argument(
+         "--port", type=int, default=8765, help="Port (default 8765)",
+     )
+
      # Profiles
      profile_p = sub.add_parser("profile", help="Profile management")
      profile_p.add_argument(
@@ -17,10 +17,13 @@ All route handlers live in routes/ directory:
  routes/ws.py -- /ws/updates (WebSocket)
  """
 
+ import logging
  import sys
  from pathlib import Path
  from datetime import datetime
 
+ logger = logging.getLogger(__name__)
+
  _script_dir = str(Path(__file__).parent.resolve())
  sys.path = [p for p in sys.path if p not in ("", _script_dir)]
 
@@ -193,7 +196,20 @@ def create_app() -> FastAPI:
 
      @application.on_event("startup")
      async def startup_event():
-         """Register Event Bus listener for SSE bridge on startup."""
+         """Initialize V3 engine and event bus on startup."""
+         # Initialize V3 engine for dashboard API routes
+         try:
+             from superlocalmemory.core.config import SLMConfig
+             from superlocalmemory.core.engine import MemoryEngine
+             config = SLMConfig.load()
+             engine = MemoryEngine(config)
+             engine.initialize()
+             application.state.engine = engine
+             logger.info("V3 engine initialized for dashboard")
+         except Exception as exc:
+             logger.warning("V3 engine init failed: %s (V3 API routes will be unavailable)", exc)
+             application.state.engine = None
+
          register_event_listener()
 
      return application
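The startup hook above degrades gracefully: if engine construction fails, the app still boots and stores `None` so API routes can report the engine as unavailable instead of crashing. The same pattern, sketched framework-free (`init_engine`, `broken_builder`, and `SimpleNamespace` standing in for FastAPI's `app.state` are illustrative names, not from the package):

```python
from types import SimpleNamespace


def init_engine(app_state, build_engine):
    """Attach build_engine() result to app_state.engine, or None on failure."""
    try:
        app_state.engine = build_engine()
    except Exception as exc:
        # The app still boots; engine-backed routes can check for None
        # and answer with a 503 instead of failing at import time.
        print(f"engine init failed: {exc}")
        app_state.engine = None


def broken_builder():
    raise RuntimeError("no config")


state = SimpleNamespace()
init_engine(state, broken_builder)
print(state.engine)  # None: the app keeps running with engine routes disabled
```

The catch-all `except Exception` is deliberate here; any failure mode (missing config, bad schema, import error) should leave the dashboard reachable.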
@@ -329,6 +329,32 @@ class V2Migrator:
          except Exception as exc:
              stats["steps"].append(f"V2 conversion partial: {exc}")
 
+         # Step 4c: Create views for V2 dashboard compatibility
+         try:
+             v2_views = {
+                 "graph_nodes": "_v2_bak_graph_nodes",
+                 "graph_clusters": "_v2_bak_graph_clusters",
+                 "sessions": "_v2_bak_sessions",
+                 "memory_events": "_v2_bak_memory_events",
+                 "identity_patterns": "_v2_bak_identity_patterns",
+                 "pattern_examples": "_v2_bak_pattern_examples",
+                 "creator_metadata": "_v2_bak_creator_metadata",
+                 "agent_registry": "_v2_bak_agent_registry",
+             }
+             view_count = 0
+             for view_name, source in v2_views.items():
+                 try:
+                     conn.execute(f'SELECT 1 FROM "{source}" LIMIT 1')
+                     conn.execute(f'DROP VIEW IF EXISTS "{view_name}"')
+                     conn.execute(f'CREATE VIEW "{view_name}" AS SELECT * FROM "{source}"')
+                     view_count += 1
+                 except Exception:
+                     pass
+             conn.commit()
+             stats["steps"].append(f"Created {view_count} V2 compatibility views")
+         except Exception:
+             pass
+
          conn.close()
          stats["steps"].append("Created V3 schema")
 
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: superlocalmemory
- Version: 3.0.0
+ Version: 3.0.8
  Summary: Information-geometric agent memory with mathematical guarantees
  Author-email: Varun Pratap Bhardwaj <admin@superlocalmemory.com>
  License: MIT
@@ -22,8 +22,10 @@ Requires-Dist: mcp>=1.0.0
  Requires-Dist: python-dateutil>=2.9.0.post0
  Requires-Dist: rank-bm25>=0.2.2
  Requires-Dist: vadersentiment>=3.3.2
+ Requires-Dist: einops>=0.8.2
  Provides-Extra: search
  Requires-Dist: sentence-transformers<4.0.0,>=2.5.0; extra == "search"
+ Requires-Dist: einops<1.0.0,>=0.7.0; extra == "search"
  Requires-Dist: torch>=2.2.0; extra == "search"
  Requires-Dist: scikit-learn<2.0.0,>=1.3.0; extra == "search"
  Requires-Dist: geoopt>=0.5.0; extra == "search"
@@ -83,8 +85,8 @@ SuperLocalMemory gives AI assistants persistent, structured memory that survives
 
  | Metric | Score | Context |
  |:-------|:-----:|:--------|
+ | LoCoMo (Mode C, full power) | **87.7%** | On conv-30, 81 scored questions |
  | LoCoMo (Mode A, zero-LLM) | **62.3%** | Highest zero-LLM score. No cloud dependency. |
- | LoCoMo (Mode C, full) | **~78%** | Competitive with funded systems ($10M+) |
  | Math layer improvement | **+12.7pp** | Average gain from mathematical foundations |
  | Multi-hop improvement | **+12pp** | 50% vs 38% (math on vs off) |
 
@@ -130,13 +132,35 @@ Query ──► Strategy Classifier ──► 4 Parallel Channels:
 
  ---
 
+ ## Prerequisites
+
+ | Requirement | Version | Why |
+ |:-----------|:--------|:----|
+ | **Node.js** | 14+ | npm package manager |
+ | **Python** | 3.11+ | V3 engine runtime |
+ | **pip** | Latest | Python dependency installer |
+
+ > All Python dependencies are installed automatically during `npm install`. You don't need to run pip manually. If any dependency fails, the installer shows clear instructions.
+
+ ---
+
  ## Quick Start
 
- ### Install via npm (recommended)
+ ### Install via npm (recommended — one command, everything included)
 
  ```bash
  npm install -g superlocalmemory
- slm setup
+ ```
+
+ This single command:
+ - Installs the V3 engine and CLI
+ - Auto-installs all Python dependencies (numpy, scipy, networkx, sentence-transformers, etc.)
+ - Creates the data directory at `~/.superlocalmemory/`
+ - Detects and guides V2 migration if applicable
+
+ Then configure:
+ ```bash
+ slm setup    # Choose mode, configure provider
  ```
 
  ### Install via pip
@@ -182,6 +206,27 @@ Add to your IDE's MCP config:
 
  24 MCP tools available: `remember`, `recall`, `search`, `fetch`, `list_recent`, `get_status`, `build_graph`, `switch_profile`, `health`, `consistency_check`, `recall_trace`, and more.
 
+ ### Web Dashboard (17 tabs)
+
+ ```bash
+ slm dashboard    # Opens at http://localhost:8765
+ ```
+
+ The V3 dashboard provides real-time visibility into your memory system:
+
+ - **Dashboard** — Mode switcher, health score, quick store/recall
+ - **Recall Lab** — Search with per-channel score breakdown (Semantic, BM25, Entity, Temporal)
+ - **Knowledge Graph** — Interactive entity relationship visualization
+ - **Memories** — Browse, search, and manage stored memories
+ - **Trust Dashboard** — Bayesian trust scores per agent with Beta distribution visualization
+ - **Math Health** — Fisher-Rao confidence, Sheaf consistency, Langevin lifecycle state
+ - **Compliance** — GDPR export/erasure, EU AI Act status, audit trail
+ - **Learning** — Adaptive ranking progress, behavioral patterns, outcome tracking
+ - **IDE Connections** — Connected AI tools status and configuration
+ - **Settings** — Mode, provider, auto-capture/recall configuration
+
+ > The dashboard runs locally at `http://localhost:8765`. No data leaves your machine.
+
  ---
 
  ## V3 Engine Features
@@ -238,7 +283,7 @@ Evaluated on the [LoCoMo benchmark](https://arxiv.org/abs/2402.09714) (Long Conv
  | EverMemOS | 92.3% | Yes | No | No |
  | MemMachine | 91.7% | Yes | No | No |
  | Hindsight | 89.6% | Yes | No | No |
- | **SLM V3 Mode C** | **~78%** | Optional | **Yes** | Partial |
+ | **SLM V3 Mode C** | **87.7%** | Optional | **Yes** | Partial |
  | **SLM V3 Mode A** | **62.3%** | **No** | **Yes** | **Yes** |
  | Mem0 ($24M) | 34.2% F1 | Yes | Partial | No |
 
@@ -6,6 +6,7 @@ mcp>=1.0.0
  python-dateutil>=2.9.0.post0
  rank-bm25>=0.2.2
  vadersentiment>=3.3.2
+ einops>=0.8.2
 
  [dev]
  pytest>=8.0
@@ -23,6 +24,7 @@ orjson<4.0.0,>=3.9.0
 
  [search]
  sentence-transformers<4.0.0,>=2.5.0
+ einops<1.0.0,>=0.7.0
  torch>=2.2.0
  scikit-learn<2.0.0,>=1.3.0
  geoopt>=0.5.0
package/ui/index.html CHANGED
@@ -18,15 +18,17 @@
      --slm-gradient-warning: linear-gradient(135deg, #f093fb 0%, #f5576c 100%);
  }
 
- body {
-     font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
-     overflow-y: scroll; /* Always show scrollbar to prevent layout shift */
- }
-
  html {
      scroll-behavior: smooth;
  }
 
+ body {
+     font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
+     overflow-x: auto;
+     overflow-y: scroll;
+     min-width: min-content;
+ }
+
  [data-bs-theme="light"] body,
  [data-bs-theme="light"] {
      background: #f8f9fa;
@@ -77,11 +79,13 @@
 
  .tab-content {
      min-height: 700px;
-     overflow: visible;
+     overflow-x: auto;
+     overflow-y: visible;
  }
 
  .tab-pane {
-     overflow: visible;
+     overflow-x: auto;
+     overflow-y: visible;
  }
 
  #graph-container {
@@ -698,7 +702,7 @@
  </div>
 
  <!-- Main Content Tabs -->
- <ul class="nav nav-tabs mb-3" id="mainTabs" role="tablist">
+ <ul class="nav nav-tabs flex-wrap mb-3" id="mainTabs" role="tablist">
    <li class="nav-item">
      <button class="nav-link active" id="dashboard-tab" data-bs-toggle="tab" data-bs-target="#dashboard-pane">
        <i class="bi bi-speedometer2"></i> Dashboard
@@ -54,7 +54,26 @@ document.getElementById('recall-lab-search')?.addEventListener('click', function
  var maxChannel = Math.max(channels.semantic || 0, channels.bm25 || 0, channels.entity_graph || 0, channels.temporal || 0) || 1;
 
  var item = document.createElement('div');
- item.className = 'list-group-item';
+ item.className = 'list-group-item list-group-item-action';
+ item.style.cursor = 'pointer';
+ item.title = 'Click to view full memory';
+ (function(result) {
+     item.addEventListener('click', function() {
+         if (typeof openMemoryDetail === 'function') {
+             openMemoryDetail({
+                 id: result.fact_id,
+                 content: result.content,
+                 score: result.score,
+                 importance: Math.round((result.confidence || 0.5) * 10),
+                 category: 'recall',
+                 tags: Object.keys(result.channel_scores || {}).join(', '),
+                 created_at: null,
+                 trust_score: result.trust_score,
+                 channel_scores: result.channel_scores
+             });
+         }
+     });
+ })(r);
 
  var header = document.createElement('h6');
  header.className = 'mb-1';