vimlm 0.0.3__tar.gz → 0.0.5__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- vimlm-0.0.5/PKG-INFO +108 -0
- vimlm-0.0.5/README.md +86 -0
- {vimlm-0.0.3 → vimlm-0.0.5}/setup.py +2 -2
- vimlm-0.0.5/vimlm.egg-info/PKG-INFO +108 -0
- vimlm-0.0.5/vimlm.egg-info/requires.txt +3 -0
- {vimlm-0.0.3 → vimlm-0.0.5}/vimlm.py +49 -31
- vimlm-0.0.3/PKG-INFO +0 -84
- vimlm-0.0.3/README.md +0 -63
- vimlm-0.0.3/vimlm.egg-info/PKG-INFO +0 -84
- vimlm-0.0.3/vimlm.egg-info/requires.txt +0 -2
- {vimlm-0.0.3 → vimlm-0.0.5}/LICENSE +0 -0
- {vimlm-0.0.3 → vimlm-0.0.5}/setup.cfg +0 -0
- {vimlm-0.0.3 → vimlm-0.0.5}/vimlm.egg-info/SOURCES.txt +0 -0
- {vimlm-0.0.3 → vimlm-0.0.5}/vimlm.egg-info/dependency_links.txt +0 -0
- {vimlm-0.0.3 → vimlm-0.0.5}/vimlm.egg-info/entry_points.txt +0 -0
- {vimlm-0.0.3 → vimlm-0.0.5}/vimlm.egg-info/top_level.txt +0 -0
vimlm-0.0.5/PKG-INFO ADDED
@@ -0,0 +1,108 @@
+Metadata-Version: 2.2
+Name: vimlm
+Version: 0.0.5
+Summary: VimLM - LLM-powered Vim assistant
+Home-page: https://github.com/JosefAlbers/vimlm
+Author: Josef Albers
+Author-email: albersj66@gmail.com
+Requires-Python: >=3.12.8
+Description-Content-Type: text/markdown
+License-File: LICENSE
+Requires-Dist: nanollama==0.0.5b0
+Requires-Dist: mlx_lm_utils==0.0.1a0
+Requires-Dist: watchfiles==1.0.4
+Dynamic: author
+Dynamic: author-email
+Dynamic: description
+Dynamic: description-content-type
+Dynamic: home-page
+Dynamic: requires-dist
+Dynamic: requires-python
+Dynamic: summary
+
+# VimLM - Local LLM-Powered Coding Assistant for Vim
+
+![vimlm](https://raw.githubusercontent.com/JosefAlbers/VimLM/main/assets/captioned_vimlm.png)
+
+An LLM-powered coding companion for Vim, inspired by GitHub Copilot/Cursor. Integrates contextual code understanding, summarization, and AI assistance directly into your Vim workflow.
+
+## Features
+
+- **Model Agnostic** - Use any MLX-compatible model via config file
+- **Vim-Native UX** - Ctrl-l/Ctrl-r keybindings and split-window responses
+- **Deep Context** - Understands code context from:
+  - Current file
+  - Visual selections
+  - Referenced files (`!@#$` syntax)
+  - Project directory structure
+- **Conversational Coding** - Iterative refinement with follow-up queries
+- **Air-Gapped Security** - 100% offline - no APIs, no tracking, no data leaks
+
+## Requirements
+
+- Apple M-series chip (M1/M2/M3/M4)
+- Python 3.12.8
+
+## Installation
+
+```zsh
+pip install vimlm
+```
+
+## Quick Start
+
+1. Launch with default model (DeepSeek-R1-Distill-Qwen-7B-4bit):
+
+```zsh
+vimlm your_file.js
+```
+
+2. **From Normal Mode**:
+   - `Ctrl-l`: Send current line + file context
+   - Example prompt: "Regex for removing html tags in item.content"
+
+3. **From Visual Mode**:
+   - Select code → `Ctrl-l`: Send selection + file context
+   - Example prompt: "Convert this to async/await syntax"
+
+4. **Add Context**: Use `!@#$` to include additional files/folders:
+   - `!@#$` (no path): Current folder
+   - `!@#$ ~/scrap/jph00/hypermedia-applications.summ.md`: Specific folder
+   - `!@#$ ~/wtm/utils.py`: Specific file
+   - Example prompt: "AJAX-ify this app !@#$ ~/scrap/jph00/hypermedia-applications.summ.md"
+
+5. **Follow-Up**: After initial response:
+   - `Ctrl-r`: Continue thread
+   - Example follow-up: "In Manifest V3"
+
+## Advanced Configuration
+
+### Custom Model Setup
+
+1. **Browse models**: [MLX Community Models on Hugging Face](https://huggingface.co/mlx-community)
+
+2. **Edit config file**:
+
+```json
+{
+  "LLM_MODEL": "/path/to/your/mlx_model"
+}
+```
+
+3. **Save to**:
+
+```
+~/vimlm/cfg.json
+```
+
+4. **Restart VimLM**
+
+## Key Bindings
+
+| Binding    | Mode          | Action                                 |
+|------------|---------------|----------------------------------------|
+| `Ctrl-l`   | Normal/Visual | Send current file + selection to LLM   |
+| `Ctrl-r`   | Normal        | Continue conversation                  |
+| `Esc`      | Prompt        | Cancel input                           |
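The `!@#$` context syntax documented above separates the natural-language query from an optional path. As a rough illustration of how such a prompt decomposes (this is a sketch of the documented syntax, not vimlm's own parser; `SEP_CMD` matches the 0.0.5 default from vimlm.py further below):

```python
# Sketch: splitting a prompt on the documented '!@#$' separator.
# This mirrors the syntax described above; vimlm's internal parsing may differ.
SEP_CMD = '!@#$'
user = 'AJAX-ify this app !@#$ ~/scrap/jph00/hypermedia-applications.summ.md'
query, _, path = (part.strip() for part in user.partition(SEP_CMD))
print(query)  # AJAX-ify this app
print(path)   # ~/scrap/jph00/hypermedia-applications.summ.md
```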
vimlm-0.0.5/README.md ADDED (+86 lines: identical to the markdown description in vimlm-0.0.5/PKG-INFO above, without the metadata header)
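Besides `LLM_MODEL`, the 0.0.5 config loader shown in the vimlm.py hunks below also honors `DEBUG`, `NUM_TOKEN`, and `SEP_CMD` in `~/vimlm/cfg.json`. A minimal sketch that writes a complete config, using the in-code defaults from vimlm.py:

```python
# Sketch: write ~/vimlm/cfg.json with every key the 0.0.5 config loader reads.
# Values shown are the defaults defined in vimlm.py below.
import json
import os

cfg = dict(
    DEBUG=True,
    LLM_MODEL="mlx-community/DeepSeek-R1-Distill-Qwen-7B-4bit",
    NUM_TOKEN=2000,
    SEP_CMD='!@#$',
)
os.makedirs(os.path.expanduser("~/vimlm"), exist_ok=True)
with open(os.path.expanduser("~/vimlm/cfg.json"), "w") as f:
    json.dump(cfg, f, indent=2)
```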
{vimlm-0.0.3 → vimlm-0.0.5}/setup.py CHANGED
@@ -5,7 +5,7 @@ with open("requirements.txt") as f:
 
 setup(
     name="vimlm",
-    version="0.0.3",
+    version="0.0.5",
     author="Josef Albers",
     author_email="albersj66@gmail.com",
     readme='README.md',
@@ -15,7 +15,7 @@ setup(
     url="https://github.com/JosefAlbers/vimlm",
     # packages=find_packages(),
     py_modules=['vimlm'],
-    python_requires=">=3.13.1",
+    python_requires=">=3.12.8",
     install_requires=requirements,
     entry_points={
        "console_scripts": [
vimlm-0.0.5/vimlm.egg-info/PKG-INFO ADDED (+108 lines: identical to vimlm-0.0.5/PKG-INFO above)
{vimlm-0.0.3 → vimlm-0.0.5}/vimlm.py CHANGED
@@ -17,7 +17,6 @@ import subprocess
 import json
 import os
 from watchfiles import awatch
-from nanollama32 import Chat
 import shutil
 import time
 from itertools import accumulate
@@ -26,23 +25,22 @@ import tempfile
 from pathlib import Path
 from string import Template
 
-DEBUG =
+DEBUG = True
+LLM_MODEL = "mlx-community/DeepSeek-R1-Distill-Qwen-7B-4bit"
 NUM_TOKEN = 2000
 SEP_CMD = '!@#$'
-
+VIMLM_DIR = os.path.expanduser("~/vimlm")
 WATCH_DIR = os.path.expanduser("~/vimlm/watch_dir")
+CFG_FILE = "cfg.json"
 LOG_FILE = "log.json"
-
+LTM_FILE = "cache.json"
 OUT_FILE = "response.md"
-
-
-
+IN_FILES = ["context", "yank", "user", "tree"]
+CFG_PATH = os.path.join(VIMLM_DIR, CFG_FILE)
+LOG_PATH = os.path.join(VIMLM_DIR, LOG_FILE)
+LTM_PATH = os.path.join(VIMLM_DIR, LTM_FILE)
 OUT_PATH = os.path.join(WATCH_DIR, OUT_FILE)
 
-if os.path.exists(WATCH_DIR):
-    shutil.rmtree(WATCH_DIR)
-os.makedirs(WATCH_DIR)
-
 def toout(s, key='tovim'):
     with open(OUT_PATH, 'w', encoding='utf-8') as f:
         f.write(s)
@@ -51,18 +49,40 @@ def toout(s, key='tovim'):
 def tolog(log, key='debug'):
     if not DEBUG and key == 'debug':
         return
-
+    try:
         with open(LOG_PATH, "r", encoding="utf-8") as log_f:
             logs = json.load(log_f)
-
+    except:
         logs = []
     logs.append(dict(key=key, log=log, timestamp=time.ctime()))
     with open(LOG_PATH, "w", encoding="utf-8") as log_f:
         json.dump(logs, log_f, indent=2)
 
+if os.path.exists(WATCH_DIR):
+    shutil.rmtree(WATCH_DIR)
+os.makedirs(WATCH_DIR)
+
+try:
+    with open(CFG_PATH, "r") as f:
+        config = json.load(f)
+    DEBUG = config.get("DEBUG", DEBUG)
+    LLM_MODEL = config.get("LLM_MODEL", LLM_MODEL)
+    NUM_TOKEN = config.get("NUM_TOKEN", NUM_TOKEN)
+    SEP_CMD = config.get("SEP_CMD", SEP_CMD)
+except Exception as e:
+    tolog(str(e))
+    with open(CFG_PATH, 'w') as f:
+        json.dump(dict(DEBUG=DEBUG, LLM_MODEL=LLM_MODEL, NUM_TOKEN=NUM_TOKEN, SEP_CMD=SEP_CMD), f, indent=2)
+
 toout('Loading LLM...')
-
-
+if LLM_MODEL is None:
+    from nanollama32 import Chat
+    chat = Chat(variant='uncn_llama_32_3b_it')
+    toout('LLM is ready')
+else:
+    from mlx_lm_utils import Chat
+    chat = Chat(model_path=LLM_MODEL)
+    toout(f'{LLM_MODEL.split('/')[-1]} is ready')
 
 def is_binary(file_path):
     try:
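A side effect of the branch above: a cfg.json whose `LLM_MODEL` is JSON `null` loads as Python `None` and selects the nanollama32 backend rather than mlx_lm_utils. A hedged sketch of flipping an existing config that way:

```python
# Sketch: set LLM_MODEL to None (JSON null) so the `if LLM_MODEL is None`
# branch above picks the nanollama32 backend on the next restart.
import json
import os

cfg_path = os.path.expanduser("~/vimlm/cfg.json")
with open(cfg_path) as f:
    cfg = json.load(f)
cfg["LLM_MODEL"] = None
with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```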
@@ -102,11 +122,11 @@ def split_str(doc, max_len=2000, get_len=len):
         chunks.append("".join(current_chunk))
     return chunks
 
-def get_context(src_path, max_len=2000, get_len=len):
+def retrieve(src_path, max_len=2000, get_len=len):
     src_path = os.path.expanduser(src_path)
     result = {}
     if not os.path.exists(src_path):
-        tolog(f"The path {src_path} does not exist.", '
+        tolog(f"The path {src_path} does not exist.", 'retrieve')
         return result
     if os.path.isfile(src_path):
         try:
@@ -114,7 +134,7 @@ def get_context(src_path, max_len=2000, get_len=len):
             content = f.read()
             result = {src_path:dict(timestamp=os.path.getmtime(src_path), list_str=split_str(content, max_len=max_len, get_len=get_len))}
         except Exception as e:
-            tolog(f'Skipped {filename} due to {e}', '
+            tolog(f'Skipped {filename} due to {e}', 'retrieve')
         return result
     for filename in os.listdir(src_path):
         try:
@@ -126,20 +146,17 @@ def get_context(src_path, max_len=2000, get_len=len):
                 content = f.read()
             result[file_path] = dict(timestamp=os.path.getmtime(file_path), list_str=split_str(content, max_len=max_len, get_len=get_len))
         except Exception as e:
-            tolog(f'Skipped {filename} due to {e}', '
+            tolog(f'Skipped {filename} due to {e}', 'retrieve')
             continue
     return result
 
-def get_ntok(s):
-    return len(chat.tokenizer.encode(s)[0])
-
 def ingest(src):
-    def load_cache(cache_path=
+    def load_cache(cache_path=LTM_PATH):
         if os.path.exists(cache_path):
             with open(cache_path, 'r', encoding='utf-8') as f:
                 return json.load(f)
         return {}
-    def dump_cache(new_data, cache_path=
+    def dump_cache(new_data, cache_path=LTM_PATH):
         current_data = load_cache(cache_path)
         for k, v in new_data.items():
             if k not in current_data or v['timestamp'] > current_data[k]['timestamp']:
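The new `dump_cache` default points at `LTM_PATH`, and its merge rule (visible in the trailing context above) only replaces a cached entry when the incoming one carries a newer timestamp. The rule in isolation, on toy data:

```python
# The timestamp merge rule from dump_cache, run on toy data: newer entries
# win, previously unseen keys are added.
current = {"a.py": dict(timestamp=100.0, summary="stale")}
incoming = {
    "a.py": dict(timestamp=200.0, summary="fresh"),
    "b.py": dict(timestamp=50.0, summary="first seen"),
}
for k, v in incoming.items():
    if k not in current or v['timestamp'] > current[k]['timestamp']:
        current[k] = v
assert current["a.py"]["summary"] == "fresh"
assert "b.py" in current
```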
@@ -149,7 +166,7 @@ def ingest(src):
     toout('Ingesting...')
     format_ingest = '{volat}{incoming}\n\n---\n\nPlease provide a succint bullet point summary for above:'
     format_volat = 'Here is a summary of part 1 of **{k}**:\n\n---\n\n{newsum}\n\n---\n\nHere is the next part:\n\n---\n\n'
-    dict_doc =
+    dict_doc = retrieve(src, get_len=chat.get_ntok)
     dict_sum = {}
     cache = load_cache()
     for k, v in dict_doc.items():
@@ -166,14 +183,14 @@ def ingest(src):
             accum = ''
             for s in list_str:
                 chat.reset()
-                newsum = chat(format_ingest.format(volat=volat, incoming=s.strip()), max_new=max_new_sum, verbose=False, stream=None)[
+                newsum = chat(format_ingest.format(volat=volat, incoming=s.strip()), max_new=max_new_sum, verbose=False, stream=None)['text']
                 accum += newsum + ' ...\n'
                 volat = format_volat.format(k=k, newsum=newsum)
         else:
             accum = list_str[0]
         chat.reset()
         toout('')
-        chat_summary = chat(format_ingest.format(volat=f'**{k}**:\n', incoming=accum), max_new=int(NUM_TOKEN/4), verbose=False, stream=OUT_PATH)[
+        chat_summary = chat(format_ingest.format(volat=f'**{k}**:\n', incoming=accum), max_new=int(NUM_TOKEN/4), verbose=False, stream=OUT_PATH)['text']
         dict_sum[k] = dict(timestamp=v_stamp, summary=chat_summary)
         dump_cache(dict_sum)
     result = ''
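The loop above implements rolling summarization: each chunk goes to the model prefixed with a carry-over (`volat`) built from the previous chunk's summary, and the per-chunk summaries are stitched into `accum`. A stripped-down sketch of the pattern, with a hypothetical `summarize()` standing in for the chat call:

```python
# Rolling summarization sketch. summarize() is a stand-in for the LLM call;
# the carry-over keeps each request bounded while preserving continuity.
def summarize(text):
    return text[:60]  # stub; a real call would return a model-written summary

chunks = ["part one ...", "part two ...", "part three ..."]
volat, accum = '', ''
for s in chunks:
    newsum = summarize(volat + s.strip())
    accum += newsum + ' ...\n'
    volat = f"Here is a summary of the previous part:\n{newsum}\n\nHere is the next part:\n"
# accum now holds the stitched chunk summaries, as in ingest() above
```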
@@ -197,10 +214,10 @@ async def monitor_directory():
     async for changes in awatch(WATCH_DIR):
         found_files = {os.path.basename(f) for _, f in changes}
         tolog(f'{found_files=}') # DEBUG
-        if
+        if IN_FILES[-1] in found_files and set(IN_FILES).issubset(set(os.listdir(WATCH_DIR))):
             tolog(f'listdir()={os.listdir(WATCH_DIR)}') # DEBUG
             data = {}
-            for file in
+            for file in IN_FILES:
                 path = os.path.join(WATCH_DIR, file)
                 with open(path, 'r', encoding='utf-8') as f:
                     data[file] = f.read().strip()
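`monitor_directory` is driven by `watchfiles.awatch`, which yields one set of `(change, path)` tuples per batch of filesystem events; the new guard fires only once all of `IN_FILES` are present in the watch directory. A minimal standalone loop of the same shape (the watched path is illustrative):

```python
# Minimal awatch loop of the kind monitor_directory builds on: each iteration
# delivers a batch of (Change, path) tuples for files under watch_dir.
import asyncio
import os

from watchfiles import awatch

async def watch(watch_dir):
    async for changes in awatch(watch_dir):
        found_files = {os.path.basename(p) for _, p in changes}
        print('changed:', found_files)

if __name__ == '__main__':
    asyncio.run(watch(os.path.expanduser('~/vimlm/watch_dir')))
```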
@@ -241,8 +258,9 @@ async def process_files(data):
     prompt = str_template.format(**data)
     tolog(prompt, 'tollm')
     toout('')
-    response = chat(prompt, max_new=NUM_TOKEN - get_ntok(prompt), verbose=False, stream=OUT_PATH)
-    toout(response)
+    response = chat(prompt, max_new=NUM_TOKEN - chat.get_ntok(prompt), verbose=False, stream=OUT_PATH)
+    toout(response['text'])
+    tolog(response['benchmark'])
 
 VIMLMSCRIPT = Template(r"""
 let s:watched_dir = expand('$WATCH_DIR')
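The new generation call budgets the response as `NUM_TOKEN - chat.get_ntok(prompt)`, so `NUM_TOKEN` caps prompt and completion together rather than the completion alone. The arithmetic, with a crude stand-in for the tokenizer:

```python
# NUM_TOKEN bounds prompt + completion; whatever the prompt does not consume
# is left as max_new. get_ntok() here is a whitespace stand-in for
# chat.get_ntok, which counts real model tokens.
NUM_TOKEN = 2000

def get_ntok(s):
    return len(s.split())

prompt = "Summarize the selected function and suggest a refactor."
max_new = NUM_TOKEN - get_ntok(prompt)
assert 0 < max_new <= NUM_TOKEN
```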
vimlm-0.0.3/PKG-INFO DELETED
@@ -1,84 +0,0 @@
-Metadata-Version: 2.2
-Name: vimlm
-Version: 0.0.3
-Summary: VimLM - LLM-powered Vim assistant
-Home-page: https://github.com/JosefAlbers/vimlm
-Author: Josef Albers
-Author-email: albersj66@gmail.com
-Requires-Python: >=3.13.1
-Description-Content-Type: text/markdown
-License-File: LICENSE
-Requires-Dist: nanollama==0.0.3
-Requires-Dist: watchfiles==1.0.4
-Dynamic: author
-Dynamic: author-email
-Dynamic: description
-Dynamic: description-content-type
-Dynamic: home-page
-Dynamic: requires-dist
-Dynamic: requires-python
-Dynamic: summary
-
-# VimLM - Vim Language Model Assistant for privacy-conscious developers
-
-![vimlm](https://raw.githubusercontent.com/JosefAlbers/VimLM/main/assets/captioned_vimlm.png)
-
-VimLM brings AI-powered assistance directly into Vim, keeping your workflow smooth and uninterrupted. Instead of constantly switching between your editor and external tools, VimLM provides real-time, context-aware suggestions, explanations, and code insights—all within Vim.
-
-Unlike proprietary cloud-based AI models, VimLM runs entirely offline, ensuring complete privacy, data security, and control. You’re not just using a tool—you own it. Fine-tune, modify, or extend it as needed, without reliance on big tech platforms.
-
-## Features
-
-- **Real-Time Interaction with local LLMs**: Runs **fully offline** with local models (default: uncensored Llama-3.2-3B).
-- **Integration with Vim's Native Workflow**: Simple Vim keybindings for quick access and split-window interface for non-intrusive responses.
-- **Context-Awareness Beyond Single Files**: Inline support for external documents and project files for richer, more informed responses.
-- **Conversational AI Assistance**: Goes beyond simple code suggestions-explains reasoning, provides alternative solutions, and allows interactive follow-ups.
-- **Versatile Use Cases**: Not just for coding-use it for documentation lookup, general Q&A, or even casual (uncensored) conversations.
-
-## Installation
-
-```zsh
-pip install vimlm
-```
-
-## Usage
-
-1. Start Vim with VimLM:
-
-```zsh
-vimlm
-```
-
-or
-
-```zsh
-vimlm your_file.js
-```
-
-2. **From Normal Mode**:
-   - `Ctrl-l`: Send current line + file context
-   - Example prompt: "Regex for removing html tags in item.content"
-
-3. **From Visual Mode**:
-   - Select text → `Ctrl-l`: Send selection + file context
-   - Example prompt: "Convert this to async/await syntax"
-
-4. **Add Context**: Use `!@#$` to include additional files/folders:
-   - `!@#$` (no path): Current folder
-   - `!@#$ ~/scrap/jph00/hypermedia-applications.summ.md`: Specific folder
-   - `!@#$ ~/wtm/utils.py`: Specific file
-   - Example prompt: "AJAX-ify this app !@#$ ~/scrap/jph00/hypermedia-applications.summ.md"
-
-5. **Follow-Up**: After initial response:
-   - `Ctrl-R`: Continue thread
-   - Example follow-up: "In Manifest V3"
-
-## Key Bindings
-
-| Binding    | Mode          | Action                                 |
-|------------|---------------|----------------------------------------|
-| `Ctrl-L`   | Normal/Visual | Send current file + selection to LLM   |
-| `Ctrl-R`   | Normal        | Continue conversation                  |
-| `Esc`      | Prompt        | Cancel input                           |
vimlm-0.0.3/README.md DELETED (-63 lines: identical to the markdown description in vimlm-0.0.3/PKG-INFO above, without the metadata header)
vimlm-0.0.3/vimlm.egg-info/PKG-INFO DELETED (-84 lines: identical to vimlm-0.0.3/PKG-INFO above)
{vimlm-0.0.3 → vimlm-0.0.5}/LICENSE: file without changes
{vimlm-0.0.3 → vimlm-0.0.5}/setup.cfg: file without changes
{vimlm-0.0.3 → vimlm-0.0.5}/vimlm.egg-info/SOURCES.txt: file without changes
{vimlm-0.0.3 → vimlm-0.0.5}/vimlm.egg-info/dependency_links.txt: file without changes
{vimlm-0.0.3 → vimlm-0.0.5}/vimlm.egg-info/entry_points.txt: file without changes
{vimlm-0.0.3 → vimlm-0.0.5}/vimlm.egg-info/top_level.txt: file without changes