code-context-engine 0.4.4__py3-none-any.whl → 0.4.6__py3-none-any.whl
This diff shows the content of publicly available package versions released to a supported registry. It is provided for informational purposes only and reflects the changes between the two versions as published.
- {code_context_engine-0.4.4.dist-info → code_context_engine-0.4.6.dist-info}/METADATA +33 -16
- {code_context_engine-0.4.4.dist-info → code_context_engine-0.4.6.dist-info}/RECORD +10 -10
- context_engine/cli.py +11 -7
- context_engine/config.py +2 -5
- context_engine/retrieval/confidence.py +2 -2
- context_engine/retrieval/retriever.py +17 -1
- {code_context_engine-0.4.4.dist-info → code_context_engine-0.4.6.dist-info}/WHEEL +0 -0
- {code_context_engine-0.4.4.dist-info → code_context_engine-0.4.6.dist-info}/entry_points.txt +0 -0
- {code_context_engine-0.4.4.dist-info → code_context_engine-0.4.6.dist-info}/licenses/LICENSE +0 -0
- {code_context_engine-0.4.4.dist-info → code_context_engine-0.4.6.dist-info}/top_level.txt +0 -0
{code_context_engine-0.4.4.dist-info → code_context_engine-0.4.6.dist-info}/METADATA
RENAMED

@@ -1,14 +1,14 @@
 Metadata-Version: 2.4
 Name: code-context-engine
-Version: 0.4.4
-Summary: Index your codebase. AI searches instead of re-reading files.
+Version: 0.4.6
+Summary: Index your codebase. AI searches instead of re-reading files. 94% token savings, benchmarked on FastAPI. Works with Claude Code, Cursor, VS Code, Gemini CLI, and Codex.
 Author-email: Fazle Elahee <felahee@gmail.com>, Raj <rajkumar.sakti@gmail.com>
 License-Expression: MIT
 Project-URL: Homepage, https://github.com/elara-labs/code-context-engine
 Project-URL: Repository, https://github.com/elara-labs/code-context-engine
 Project-URL: Issues, https://github.com/elara-labs/code-context-engine/issues
 Keywords: claude,context,mcp,llm,code-indexing,vector-search
-Classifier: Development Status ::
+Classifier: Development Status :: 4 - Beta
 Classifier: Intended Audience :: Developers
 Classifier: Programming Language :: Python :: 3
 Classifier: Programming Language :: Python :: 3.11
@@ -37,6 +37,7 @@ Requires-Dist: httpx>=0.27
 Requires-Dist: fastapi>=0.110
 Requires-Dist: uvicorn>=0.29
 Requires-Dist: aiohttp>=3.9
+Requires-Dist: psutil>=5.9
 Provides-Extra: dev
 Requires-Dist: pytest>=8.0; extra == "dev"
 Requires-Dist: pytest-asyncio>=0.23; extra == "dev"
@@ -53,10 +54,11 @@ Dynamic: license-file
 <h1 align="center">Code Context Engine</h1>
 
 <p align="center">
-  <strong>Index your codebase. AI searches instead of re-reading files.
+  <strong>Index your codebase. AI searches instead of re-reading files. 94% token savings, benchmarked.</strong>
 </p>
 
 <p align="center">
+  <a href="https://github.com/elara-labs/code-context-engine/actions/workflows/ci.yml"><img src="https://github.com/elara-labs/code-context-engine/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
   <a href="https://pypi.org/project/code-context-engine/"><img src="https://img.shields.io/pypi/v/code-context-engine?color=blue&label=PyPI" alt="PyPI"></a>
   <a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.11+-blue.svg" alt="Python 3.11+"></a>
   <a href="https://modelcontextprotocol.io"><img src="https://img.shields.io/badge/MCP-compatible-green.svg" alt="MCP Compatible"></a>
@@ -87,6 +89,14 @@ Dynamic: license-file
 
 ---
 
+## System requirements
+
+- Python 3.11+
+- A C compiler and `cmake` (needed to build tree-sitter grammars)
+  - **macOS:** `xcode-select --install`
+  - **Ubuntu/Debian:** `sudo apt install build-essential cmake`
+  - **Windows:** Visual Studio Build Tools (C++ workload)
+
 ## Install and see savings in 60 seconds
 
 ```bash
@@ -112,7 +122,7 @@ Multiple editors in the same project? All get configured in one command.
 ```
 my-project · 38 queries
 
-⛁ ⛁ ⛁ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶
+⛁ ⛁ ⛁ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   94% tokens saved
 
 Without CCE    48.0k tokens    $0.24
 With CCE        3.4k tokens    $0.02
@@ -126,7 +136,7 @@ Multiple editors in the same project? All get configured in one command.
 
 ## Why this matters
 
-Input tokens are 85-95% of your Claude Code bill. CCE cuts them by
+Input tokens are 85-95% of your Claude Code bill. CCE cuts them by 94% ([benchmarked on FastAPI](#benchmark-fastapi-independently-verified)).
 
 ```
 Without CCE: Claude reads payments.py + shipping.py = 45,000 tokens
@@ -144,17 +154,16 @@ With CCE: context_search "payment flow" = 800 tokens
 
 ## Benchmark: FastAPI (independently verified)
 
-We benchmarked CCE against [FastAPI](https://github.com/fastapi/fastapi) (
+We benchmarked CCE against [FastAPI](https://github.com/fastapi/fastapi) (53 source files, 180K tokens) with 20 real coding questions. No cherry-picking, no synthetic queries.
 
 **Methodology:** For each query, "without CCE" means reading the full content of every file the query touches. "With CCE" means the relevant chunks after compression. This is conservative (agents often read more files than needed).
 
 | Metric | Result |
 |--------|--------|
-| **Retrieval** | **
-| **+ Compression** | **
-| **Combined** | **99.
-| Recall@10 (found the right files) | 0.
-| Precision@10 | 0.30 |
+| **Retrieval** | **94%** savings (83,681 → 4,927 tokens/query) |
+| **+ Compression** | **89%** additional (4,927 → 523 tokens/query) |
+| **Combined** | **99.4%** (83,681 → 523 tokens/query) |
+| Recall@10 (found the right files) | 0.90 |
 | Latency p50 | 0.4ms |
 | Queries tested | 20 |
 
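The percentages in the new table follow directly from the per-query token counts it lists. A quick check of the arithmetic (the `savings_pct` helper is ours, for illustration; only the token counts come from the table):

```python
def savings_pct(before: int, after: int) -> float:
    """Percent of tokens saved going from `before` to `after`."""
    return (1 - after / before) * 100

# Per-query token counts from the benchmark table.
FULL_FILES = 83_681   # reading every touched file in full
RETRIEVED = 4_927     # relevant chunks after retrieval
COMPRESSED = 523      # after chunk compression

print(round(savings_pct(FULL_FILES, RETRIEVED)))      # retrieval layer: 94
print(round(savings_pct(RETRIEVED, COMPRESSED)))      # compression layer: 89
print(round(savings_pct(FULL_FILES, COMPRESSED), 1))  # combined: 99.4
```

The two layers compound: 94% then 89% of the remainder yields the 99.4% combined figure.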
@@ -162,8 +171,8 @@ We benchmarked CCE against [FastAPI](https://github.com/fastapi/fastapi) (48 sou
 
 | Layer | What it does | Savings | Method |
 |-------|-------------|---------|--------|
-| **Retrieval** | Full files → relevant code chunks |
-| **Chunk Compression** | Raw chunks → signatures + docstrings |
+| **Retrieval** | Full files → relevant code chunks | 94% | measured |
+| **Chunk Compression** | Raw chunks → signatures + docstrings | 89% | measured |
 | **Output Compression** | Reduces Claude's reply length | 65% | estimated |
 | **Grammar** | Drops articles/fillers from memory text | 13% | measured |
@@ -210,6 +219,14 @@ cce savings --all # see savings across all projects
 
 ---
 
+## How is CCE different?
+
+CCE is editor-agnostic, local-first, and gives you measurable token savings. Your code never leaves your machine. Unlike built-in indexing (Cursor, Continue), CCE works across Claude Code, VS Code, Cursor, Gemini CLI, and Codex with a single index. Unlike cloud tools (Greptile), it's free and private.
+
+See the [full comparison with alternatives](docs/comparison.md) for an honest look at trade-offs.
+
+---
+
 ## How it works (the short version)
 
 1. **Index:** Tree-sitter parses your code into semantic chunks (functions, classes, modules). Stored as vector embeddings locally.
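The index-then-search flow the README describes can be illustrated with a toy nearest-neighbor lookup over chunk embeddings. This is a sketch, not CCE's implementation: the 3-dimensional vectors and chunk ids are invented, and the real embeddings come from an ONNX model.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy index: chunk id -> embedding. In CCE, chunks come from tree-sitter
# parsing and embeddings from a local model; these values are made up.
index = {
    "payments.py:process_payment": [0.9, 0.1, 0.0],
    "shipping.py:quote_shipping": [0.1, 0.9, 0.1],
}

def search(query_vec: list[float], top_k: int = 1) -> list[str]:
    """Return the top_k chunk ids closest to the query embedding."""
    ranked = sorted(index, key=lambda cid: cosine(query_vec, index[cid]), reverse=True)
    return ranked[:top_k]

print(search([1.0, 0.0, 0.0]))  # ['payments.py:process_payment']
```

The point of the design: only the few closest chunks are sent to the model, instead of whole files.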
@@ -228,7 +245,7 @@ Re-indexing after edits takes under 1 second (96% embedding cache hit rate). Git
 
 Output compression tools (like Caveman) save 20-75% on output tokens. Output is 5-15% of your bill. Net savings: ~11%.
 
-CCE saves on **input** tokens (
+CCE saves on **input** tokens (94% retrieval + 89% compression on FastAPI, [independently benchmarked](#benchmark-fastapi-independently-verified)). Input is 85-95% of your bill.
 
 ### It actually understands your code
@@ -398,7 +415,7 @@ No GPU required. Embedding model runs on CPU via ONNX Runtime.
 - [x] Clean uninstall (removes all CCE artifacts)
 - [x] AST-aware chunking for PHP, Go, Rust, Java (tree-sitter)
 - [x] Multi-editor support (Cursor, VS Code/Copilot, Gemini CLI)
-- [x] Reproducible benchmark suite (
+- [x] Reproducible benchmark suite (94% savings on FastAPI, per-layer breakdown)
 - [x] Session savings visibility (shown at every session start)
 - [ ] Tree-sitter support for C, C++, Ruby, Swift, Kotlin
 - [ ] Docker support for remote mode
{code_context_engine-0.4.4.dist-info → code_context_engine-0.4.6.dist-info}/RECORD
RENAMED

@@ -1,8 +1,8 @@
-code_context_engine-0.4.
+code_context_engine-0.4.6.dist-info/licenses/LICENSE,sha256=vLbw0GGCVJSIRppMus7Oq0PyMDhDXz-dfvz2rPpWtjQ,1069
 context_engine/__init__.py,sha256=qThGxB7xfZi5M9jDpUno0MKBp7KKrEOdH1hG4wHMuLc,193
-context_engine/cli.py,sha256=
+context_engine/cli.py,sha256=3m4UHGkURqlhnyr6AslU7fuansvD6TMR2VDrPjd0Yhc,113375
 context_engine/cli_style.py,sha256=a3l3Smq1gIN2asbNalFUz0i_5x7Tmkp_wEhyGMoo8a4,2460
-context_engine/config.py,sha256=
+context_engine/config.py,sha256=2pCe-nJbB1IgrFwT3iMOmUSZMaaH2IBamm867CULdnY,7129
 context_engine/editors.py,sha256=LT_WdYwyB1EeH1xsB6DDtJlbGMKXOTIVSOWXOOfXh1U,8970
 context_engine/event_bus.py,sha256=g4r9QKuC-Y7RmrjOTlUrJSB-bBTpAqp0-_IXeMUg4wo,775
 context_engine/models.py,sha256=XBbM0CUqNDQ5MOp6F3STST2qLqy2Zk0m050ZtWdXkrk,2048
@@ -46,9 +46,9 @@ context_engine/memory/hook_server.py,sha256=y62r7TGxXIDIAMiAcebIyqHE0fU5u-1dq3qG
 context_engine/memory/hooks.py,sha256=q3UAOOsRJAiYh_OFj6AsemUVy95tsZAp-S8ok3WRTUg,14129
 context_engine/memory/migrate.py,sha256=X5w6t96mWXz5p_CtEHYNQEpmCvcnwJ0uVsXeM9k3Xro,9447
 context_engine/retrieval/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
-context_engine/retrieval/confidence.py,sha256=
+context_engine/retrieval/confidence.py,sha256=y7fcIPKzSE0dXRNmoHid4z4otxiE-at901G0jK2gujA,1621
 context_engine/retrieval/query_parser.py,sha256=ljxkD25HVE4yrikmq2UJO6aLPTnDM43RLpYWQ1huh6k,3532
-context_engine/retrieval/retriever.py,sha256=
+context_engine/retrieval/retriever.py,sha256=LeZROv0k-WGjJTZ_A4WDK5pNivq-s6b2-Mk-9A5YAO0,8947
 context_engine/storage/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
 context_engine/storage/backend.py,sha256=2GjQ8wqs09lG-plqJOH9pBjvb3hZrFLIq1332BRnF2E,1062
 context_engine/storage/fts_store.py,sha256=cUvftCeBxD_2jLG-rjV2iOjg-LeUI0dmtxfUYA_ICjQ,3991
@@ -56,8 +56,8 @@ context_engine/storage/graph_store.py,sha256=mftuJFvlkFeBlzMsQorY5YS4l5wsDUxCMw5
 context_engine/storage/local_backend.py,sha256=5MVoAn6Jkiltho-9BjClisLkyXMkSZZc2Z_h3N7Vfcg,4200
 context_engine/storage/remote_backend.py,sha256=u77lnGIvqrL3PwInjT6nfRgyNn6oVxW92KUK66oWrvI,5504
 context_engine/storage/vector_store.py,sha256=tA0ol_v5B2KRNMt2hE2kI4qnYe_AoYP_HSp1MvzcsFU,14704
-code_context_engine-0.4.
-code_context_engine-0.4.
-code_context_engine-0.4.
-code_context_engine-0.4.
-code_context_engine-0.4.
+code_context_engine-0.4.6.dist-info/METADATA,sha256=EMG78MwkTq3rKZK-PBuafCTw95vh2n8URpfmN_r9JC8,18507
+code_context_engine-0.4.6.dist-info/WHEEL,sha256=aeYiig01lYGDzBgS8HxWXOg3uV61G9ijOsup-k9o1sk,91
+code_context_engine-0.4.6.dist-info/entry_points.txt,sha256=DQuRWUuVFM7nPcXtDmJzlem7QA0IboD_4N8AnTtDD9Q,144
+code_context_engine-0.4.6.dist-info/top_level.txt,sha256=X1-RUqb61WXBjy3JjsW2oXwfvqk2ydXKDNidxmw4CZ4,15
+code_context_engine-0.4.6.dist-info/RECORD,,
context_engine/cli.py
CHANGED
@@ -1674,12 +1674,10 @@ def search(ctx: click.Context, query: str, top_k: int) -> None:
             lines.append(f" {DOT} {dim('No results found')}")
         else:
             # Compute tokens
-            raw_tokens = 0
             served_tokens = 0
             seen_files: set[str] = set()
             for r in results:
                 chunk_tokens = max(1, len(r.content) // 4)
-                raw_tokens += chunk_tokens
                 served_tokens += chunk_tokens
                 seen_files.add(r.file_path)
 
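The token accounting in this hunk uses a characters-divided-by-4 heuristic rather than a real tokenizer. Extracted standalone (mirroring the expression in the hunk; actual tokenizer counts will vary by model):

```python
def estimate_tokens(text: str) -> int:
    """Rough token count: ~4 characters per token, with a floor of 1
    so even an empty or tiny chunk registers. Mirrors the
    max(1, len(content) // 4) expression in the hunk above."""
    return max(1, len(text) // 4)

print(estimate_tokens(""))        # 1 (floor applies)
print(estimate_tokens("a" * 40))  # 10
```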
@@ -1702,7 +1700,8 @@ def search(ctx: click.Context, query: str, top_k: int) -> None:
                 lines.append(f" {dim(first_line)}")
 
             lines.append("")
-
+            savings_pct = int((1 - served_tokens / full_file_tokens) * 100) if full_file_tokens > 0 else 0
+            lines.append(f" {CHECK} {success(f'{len(results)} results')} {dim(f'{served_tokens} tokens served vs {full_file_tokens} full file tokens ({savings_pct}% saved)')}")
 
         # Update stats
         stats_path = storage_dir / "stats.json"
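The new summary line guards the percentage against a zero denominator. The same logic as a standalone function (the function name is ours; the formula and guard are from the hunk):

```python
def savings_pct(served_tokens: int, full_file_tokens: int) -> int:
    """Percent of tokens saved vs reading full files, truncated to an
    int. Returns 0 when no full-file baseline exists, matching the
    `if full_file_tokens > 0 else 0` guard added in the hunk above."""
    if full_file_tokens <= 0:
        return 0
    return int((1 - served_tokens / full_file_tokens) * 100)

print(savings_pct(3_400, 48_000))  # 92
print(savings_pct(0, 0))           # 0, no division error
```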
@@ -1711,10 +1710,8 @@ def search(ctx: click.Context, query: str, top_k: int) -> None:
         except (json.JSONDecodeError, OSError):
             stats = {}
         stats["queries"] = stats.get("queries", 0) + 1
-        stats["
+        stats["full_file_tokens"] = stats.get("full_file_tokens", 0) + full_file_tokens
         stats["served_tokens"] = stats.get("served_tokens", 0) + served_tokens
-        stats.setdefault("full_file_tokens", 0)
-        stats["full_file_tokens"] = max(stats["full_file_tokens"], full_file_tokens)
         stats_path.write_text(json.dumps(stats))
 
         lines.append("")
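The stats change is behavioral: `full_file_tokens` is now summed across queries instead of keeping a running max, so it accumulates like `served_tokens` does. A self-contained sketch of the new accumulation (the `record_query` wrapper is ours; the key names and update logic follow the hunk):

```python
import json
import tempfile
from pathlib import Path

def record_query(stats_path: Path, served: int, full: int) -> dict:
    """Accumulate per-query token stats as the hunk above now does:
    both counters are summed per query. The old code kept only the max
    full_file_tokens, which understated cumulative savings."""
    try:
        stats = json.loads(stats_path.read_text())
    except (FileNotFoundError, json.JSONDecodeError, OSError):
        stats = {}
    stats["queries"] = stats.get("queries", 0) + 1
    stats["full_file_tokens"] = stats.get("full_file_tokens", 0) + full
    stats["served_tokens"] = stats.get("served_tokens", 0) + served
    stats_path.write_text(json.dumps(stats))
    return stats

with tempfile.TemporaryDirectory() as tmp:
    p = Path(tmp) / "stats.json"
    record_query(p, served=500, full=40_000)
    stats = record_query(p, served=300, full=20_000)
    print(stats["queries"], stats["full_file_tokens"])  # 2 60000
```

With the old max-based logic the second call would have left `full_file_tokens` at 40,000.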
@@ -1724,12 +1721,19 @@ def search(ctx: click.Context, query: str, top_k: int) -> None:
 
 
 @main.command()
-
+@click.option("--yes", "-y", is_flag=True, help="Skip confirmation prompt")
+def uninstall(yes: bool) -> None:
     """Remove CCE from the current project (hooks, .mcp.json entry, CLAUDE.md block)."""
     from context_engine.cli_style import section, animate, value, dim, success, warn, CHECK, CROSS, DOT
 
     project_dir = Path.cwd()
     project_name = project_dir.name
+
+    if not yes:
+        if not click.confirm(f"Remove CCE from {project_name}?", default=False):
+            click.echo("Cancelled.")
+            return
+
     lines: list[str] = []
     lines.append("")
     lines.append(section(f"Uninstall · {project_name}"))
context_engine/config.py
CHANGED
@@ -90,11 +90,8 @@ class Config:
     storage_path: str = str(_CCE_HOME / "projects")
 
     def detect_resource_profile(self) -> str:
-
-
-            ram_gb = psutil.virtual_memory().total / (1024 ** 3)
-        except ImportError:
-            ram_gb = 16
+        import psutil
+        ram_gb = psutil.virtual_memory().total / (1024 ** 3)
         if ram_gb >= 32:
             return "full"
         if ram_gb >= 12:
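This hunk drops the ImportError fallback (which assumed 16 GB) and imports `psutil` unconditionally, which is consistent with `psutil>=5.9` being added to Requires-Dist in the METADATA hunk: once the dependency is guaranteed, the fallback is dead code. The threshold logic, with RAM detection factored out for testing (only `>= 32` returning `"full"` and the `>= 12` cutoff are visible in the diff; the lower-tier profile names below are placeholders, not the package's real names):

```python
def detect_resource_profile(ram_gb: float) -> str:
    """Pick a resource profile from available RAM in GiB.
    The >= 32 -> "full" branch and the >= 12 cutoff come from the hunk
    above; the diff is cut off there, so "medium" and "light" are
    hypothetical names for the remaining tiers."""
    if ram_gb >= 32:
        return "full"
    if ram_gb >= 12:
        return "medium"  # hypothetical tier name
    return "light"       # hypothetical tier name

print(detect_resource_profile(64))  # full
```

In the real method, `ram_gb` comes from `psutil.virtual_memory().total / (1024 ** 3)`.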
context_engine/retrieval/confidence.py
CHANGED

@@ -14,8 +14,8 @@ import time
 from context_engine.models import Chunk
 
 _VECTOR_WEIGHT = 0.5
-_KEYWORD_WEIGHT = 0.
-_RECENCY_WEIGHT = 0.
+_KEYWORD_WEIGHT = 0.4
+_RECENCY_WEIGHT = 0.1
 _MAX_KEYWORD_DISTANCE = 5
 _RECENCY_HALF_LIFE = 7 * 24 * 3600  # 1 week
 
context_engine/retrieval/retriever.py
CHANGED

@@ -15,6 +15,9 @@ _RRF_K = 60
 # [0,1] by the best score in the candidate set so an exact-match FTS rank-1 hit
 # scores the same as a vector rank-1 hit instead of being clamped to ~1.0.
 _CONFIDENCE_WEIGHT = 0.5
+# Max chunks from the same file in the final result set. Prevents one large
+# file from dominating results and improves file-level precision.
+_MAX_CHUNKS_PER_FILE = 3
 # When the parsed query looks like a code lookup, give FTS more pull because
 # exact-identifier hits are usually what the user wants.
 _FTS_BOOST_CODE_LOOKUP = 1.5
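The `_RRF_K = 60` constant in this hunk's context suggests the vector and full-text rankings are merged with reciprocal rank fusion, for which k = 60 is the conventional setting. Whether CCE fuses exactly this way is an assumption based on the constant's name; a generic sketch of the technique:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: score(d) = sum over ranked lists of
    1 / (k + rank). Documents ranked highly by several lists win even
    when no single list puts them first. k = 60 matches _RRF_K."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["a.py", "b.py", "c.py"]  # hypothetical ranked lists
fts_hits = ["b.py", "d.py", "a.py"]
print(rrf_fuse([vector_hits, fts_hits])[:2])  # ['b.py', 'a.py']
```

`b.py` wins because ranks 2 and 1 sum to more reciprocal credit than `a.py`'s ranks 1 and 3.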
@@ -129,7 +132,20 @@ class HybridRetriever:
             scored.append((chunk, final_score))
 
         scored.sort(key=lambda x: x[1], reverse=True)
-
+
+        # File diversity: cap chunks per file so one large file doesn't
+        # dominate the result set. This improves precision by letting
+        # chunks from more files surface into the top-k.
+        file_counts: dict[str, int] = {}
+        diverse: list[Chunk] = []
+        for chunk, _ in scored:
+            count = file_counts.get(chunk.file_path, 0)
+            if count < _MAX_CHUNKS_PER_FILE:
+                diverse.append(chunk)
+                file_counts[chunk.file_path] = count + 1
+            if len(diverse) >= top_k:
+                break
+        ranked = diverse
 
         # Graph expansion: fetch 1-2 bonus chunks from files reachable via
 # CALLS/IMPORTS edges from the top results.
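The diversity pass added here can be exercised in isolation. A runnable version with chunks reduced to file paths (the `diversify` function name and toy data are ours; the algorithm mirrors the hunk):

```python
def diversify(scored: list[tuple[str, float]], top_k: int,
              max_per_file: int = 3) -> list[str]:
    """Walk the score-sorted list, keeping at most max_per_file chunks
    per file and stopping once top_k results are collected. Mirrors the
    file-diversity pass in the hunk above (_MAX_CHUNKS_PER_FILE = 3)."""
    file_counts: dict[str, int] = {}
    out: list[str] = []
    for path, _score in scored:
        count = file_counts.get(path, 0)
        if count < max_per_file:
            out.append(path)
            file_counts[path] = count + 1
        if len(out) >= top_k:
            break
    return out

scored = [("big.py", 0.9), ("big.py", 0.8), ("big.py", 0.7),
          ("big.py", 0.6), ("other.py", 0.5)]
print(diversify(scored, top_k=4))  # ['big.py', 'big.py', 'big.py', 'other.py']
```

Without the cap, all four `big.py` chunks would fill the top of the list and `other.py` would only surface at a larger top_k; the cap trades a slightly lower-scored chunk for file-level coverage, which is what the Recall@10 metric rewards.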
{code_context_engine-0.4.4.dist-info → code_context_engine-0.4.6.dist-info}/entry_points.txt
RENAMED (file without changes)

{code_context_engine-0.4.4.dist-info → code_context_engine-0.4.6.dist-info}/licenses/LICENSE
RENAMED (file without changes)