ipyai 0.0.1__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- ipyai-0.0.1/PKG-INFO +158 -0
- ipyai-0.0.1/README.md +139 -0
- ipyai-0.0.1/ipyai/__init__.py +3 -0
- ipyai-0.0.1/ipyai/core.py +444 -0
- ipyai-0.0.1/ipyai.egg-info/PKG-INFO +158 -0
- ipyai-0.0.1/ipyai.egg-info/SOURCES.txt +10 -0
- ipyai-0.0.1/ipyai.egg-info/dependency_links.txt +1 -0
- ipyai-0.0.1/ipyai.egg-info/requires.txt +10 -0
- ipyai-0.0.1/ipyai.egg-info/top_level.txt +1 -0
- ipyai-0.0.1/pyproject.toml +30 -0
- ipyai-0.0.1/setup.cfg +4 -0
- ipyai-0.0.1/tests/test_core.py +348 -0
ipyai-0.0.1/PKG-INFO
ADDED
@@ -0,0 +1,158 @@
Metadata-Version: 2.4
Name: ipyai
Version: 0.0.1
Summary: Minimal IPython backtick-to-AI extension
Author-email: Jeremy Howard <info@answer.ai>
License: Apache-2.0
Project-URL: Source, https://github.com/answerdotai/fastship
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: fastcore
Requires-Dist: ipython>=9
Requires-Dist: lisette>=0.0.47
Requires-Dist: rich
Provides-Extra: dev
Requires-Dist: build>=1.0.0; extra == "dev"
Requires-Dist: fastship>=0.0.6; extra == "dev"
Requires-Dist: pytest; extra == "dev"
Requires-Dist: twine>=5.0.0; extra == "dev"

# ipyai

`ipyai` is an IPython extension that turns any input starting with `` ` `` into an AI prompt.

It is aimed at terminal IPython, not notebook frontends. Prompts stream through `lisette`, final output is rendered with `rich`, and prompt history is stored alongside normal IPython history in the same SQLite database.

## Install

```bash
pip install ipyai
```

## Load

```python
%load_ext ipyai
```

If you change the package in a running shell:

```python
%reload_ext ipyai
```

## How To Auto-Load `ipyai`

`ipyai` is designed for terminal IPython. To auto-load it, add this to an `ipython_config.py` file used by terminal `ipython`:

```python
c.TerminalIPythonApp.extensions = ["ipyai"]
```

Good places for that file include:

- env-local: `{sys.prefix}/etc/ipython/ipython_config.py`
- user-local: `~/.ipython/profile_default/ipython_config.py`
- system-wide IPython config directories

In a virtualenv, the env-local path is usually:

- `.venv/etc/ipython/ipython_config.py`

To see which config paths your current `ipython` is searching, run:

```bash
ipython --debug -c 'exit()' 2>&1 | grep Searching
```

## Usage

Only the leading backtick is special. There is no closing delimiter.

Single line:

```python
`write a haiku about sqlite
```

Multiline paste:

```python
`summarize this module:
focus on state management
and persistence behavior
```

Backslash-Enter continuation in the terminal:

```python
`draft a migration plan \
with risks and rollback steps
```

`ipyai` also provides a line and cell magic named `%ipyai` / `%%ipyai`.

## `%ipyai` commands

```python
%ipyai
%ipyai model claude-sonnet-4-6
%ipyai think m
%ipyai search h
%ipyai code_theme monokai
%ipyai reset
```

Behavior:

- `%ipyai` prints the active model, think level, search level, code theme, config path, and system prompt path
- `%ipyai model ...`, `%ipyai think ...`, `%ipyai search ...`, `%ipyai code_theme ...` change the current session only
- `%ipyai reset` deletes AI prompt history for the current IPython session and resets the code-context baseline

## Tools

To expose a function from the active IPython namespace as a tool for the current conversation, reference it as ``&`name` `` in the prompt:

```python
def weather(city): return f"Sunny in {city}"

`use &`weather` to answer the question about Brisbane
```

## Output Rendering

Responses are streamed directly to the terminal during generation. After streaming completes:

- the stored response remains the original full `lisette` output
- the visible terminal output is re-rendered with `rich.Markdown`
- tool call detail blocks are compacted to a short single-line form such as `🔧 f(x=1) => 2`

## Configuration

On first load, `ipyai` creates two files under the XDG config directory:

- `~/.config/ipyai/config.json`
- `~/.config/ipyai/sysp.txt`

`config.json` currently supports:

```json
{
  "model": "claude-sonnet-4-6",
  "think": "l",
  "search": "l",
  "code_theme": "monokai"
}
```

Notes:

- `model` defaults from `IPYAI_MODEL` if that environment variable is set when the config file is first created
- `think` and `search` must be one of `l`, `m`, or `h`
- `code_theme` is passed to Rich for fenced and inline code styling

`sysp.txt` is used as the system prompt passed to `lisette.Chat`.

## Development

See [DEV.md](DEV.md) for project layout, architecture, persistence details, and development workflow.
ipyai-0.0.1/README.md
ADDED
@@ -0,0 +1,139 @@
# ipyai

`ipyai` is an IPython extension that turns any input starting with `` ` `` into an AI prompt.

It is aimed at terminal IPython, not notebook frontends. Prompts stream through `lisette`, final output is rendered with `rich`, and prompt history is stored alongside normal IPython history in the same SQLite database.

## Install

```bash
pip install ipyai
```

## Load

```python
%load_ext ipyai
```

If you change the package in a running shell:

```python
%reload_ext ipyai
```

## How To Auto-Load `ipyai`

`ipyai` is designed for terminal IPython. To auto-load it, add this to an `ipython_config.py` file used by terminal `ipython`:

```python
c.TerminalIPythonApp.extensions = ["ipyai"]
```

Good places for that file include:

- env-local: `{sys.prefix}/etc/ipython/ipython_config.py`
- user-local: `~/.ipython/profile_default/ipython_config.py`
- system-wide IPython config directories

In a virtualenv, the env-local path is usually:

- `.venv/etc/ipython/ipython_config.py`

To see which config paths your current `ipython` is searching, run:

```bash
ipython --debug -c 'exit()' 2>&1 | grep Searching
```
## Usage

Only the leading backtick is special. There is no closing delimiter.

Single line:

```python
`write a haiku about sqlite
```

Multiline paste:

```python
`summarize this module:
focus on state management
and persistence behavior
```

Backslash-Enter continuation in the terminal:

```python
`draft a migration plan \
with risks and rollback steps
```

`ipyai` also provides a line and cell magic named `%ipyai` / `%%ipyai`.
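The rewrite behind this syntax can be sketched as a small standalone pair of functions. This is a minimal reimplementation mirroring the package's `transform_backticks` in `ipyai/core.py`, not the exact shipped code: IPython cleanup transforms receive the raw input as a list of lines (each ending in a newline) and return a replacement list of lines.

```python
def prompt_from_lines(lines):
    "Return the prompt text if the first line starts with a backtick, else None."
    if not lines or not lines[0].startswith("`"): return None
    first, *rest = lines
    joined = []
    # Drop backslash-Enter continuation markers on every line except the last
    for i, line in enumerate([first[1:], *rest]):
        if i < len(rest) and line.endswith("\\\n"): line = line[:-2] + "\n"
        joined.append(line)
    return "".join(joined)

def transform_backticks(lines, magic="ipyai"):
    "Rewrite a backtick prompt into a cell-magic invocation; pass other input through."
    prompt = prompt_from_lines(lines)
    if prompt is None: return lines  # not a backtick prompt: unchanged
    return [f"get_ipython().run_cell_magic({magic!r}, '', {prompt!r})\n"]
```

Ordinary code is returned untouched, so the transform is safe to leave installed for every cell.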
## `%ipyai` commands

```python
%ipyai
%ipyai model claude-sonnet-4-6
%ipyai think m
%ipyai search h
%ipyai code_theme monokai
%ipyai reset
```

Behavior:

- `%ipyai` prints the active model, think level, search level, code theme, config path, and system prompt path
- `%ipyai model ...`, `%ipyai think ...`, `%ipyai search ...`, `%ipyai code_theme ...` change the current session only
- `%ipyai reset` deletes AI prompt history for the current IPython session and resets the code-context baseline
## Tools

To expose a function from the active IPython namespace as a tool for the current conversation, reference it as ``&`name` `` in the prompt:

```python
def weather(city): return f"Sunny in {city}"

`use &`weather` to answer the question about Brisbane
```
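The reference syntax is just a regex scan over the prompt text. This sketch mirrors the `_tool_re` pattern used in `ipyai/core.py`:

```python
import re

# Tool references look like &`name`, where name is a Python identifier.
tool_re = re.compile(r"&`(\w+)`")

def tool_names(text):
    "Collect the set of tool names referenced in a prompt (empty for no text)."
    return set(tool_re.findall(text or ""))
```

Each collected name is then looked up in the IPython user namespace and, if callable, passed to the chat as a tool.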
## Output Rendering

Responses are streamed directly to the terminal during generation. After streaming completes:

- the stored response remains the original full `lisette` output
- the visible terminal output is re-rendered with `rich.Markdown`
- tool call detail blocks are compacted to a short single-line form such as `🔧 f(x=1) => 2`
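The compaction step can be illustrated with a simplified standalone sketch (mirroring `compact_tool_display` in `ipyai/core.py`, minus result truncation): `lisette` renders tool calls as HTML `<details>` blocks holding a JSON payload, and each block is collapsed to one line after streaming.

```python
import json, re

FENCE = "`" * 3  # literal triple backtick, built indirectly to keep this example fence-safe

# Matches: <details class='tool-usage-details'><summary>...</summary> ```json ... ``` </details>
block_re = re.compile(
    rf"<details class='tool-usage-details'>\s*<summary>(.*?)</summary>"
    rf"\s*{FENCE}json\s*(.*?)\s*{FENCE}\s*</details>", flags=re.DOTALL)

def compact(text):
    def repl(m):
        summary, payload = m.groups()
        try: result = json.loads(payload).get("result", "")
        except Exception: return m.group(0)  # leave unparseable blocks intact
        return f"🔧 {summary.strip()} => {result}"
    return block_re.sub(repl, text)
```

Blocks whose payload fails to parse are deliberately left verbatim rather than mangled.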
## Configuration

On first load, `ipyai` creates two files under the XDG config directory:

- `~/.config/ipyai/config.json`
- `~/.config/ipyai/sysp.txt`

`config.json` currently supports:

```json
{
  "model": "claude-sonnet-4-6",
  "think": "l",
  "search": "l",
  "code_theme": "monokai"
}
```

Notes:

- `model` defaults from `IPYAI_MODEL` if that environment variable is set when the config file is first created
- `think` and `search` must be one of `l`, `m`, or `h`
- `code_theme` is passed to Rich for fenced and inline code styling

`sysp.txt` is used as the system prompt passed to `lisette.Chat`.
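The load-or-create behavior can be sketched as follows. This is a simplified reimplementation of `load_config` from `ipyai/core.py` taking an explicit path (the real version resolves `~/.config/ipyai/config.json` via fastcore's XDG helpers and also normalizes `model` and `code_theme`):

```python
import json
from pathlib import Path

DEFAULTS = dict(model="claude-sonnet-4-6", think="l", search="l", code_theme="monokai")

def load_config(path):
    "Read config from path, writing defaults on first run; validate levels."
    path = Path(path)
    cfg = dict(DEFAULTS)
    path.parent.mkdir(parents=True, exist_ok=True)
    if path.exists():
        data = json.loads(path.read_text())
        cfg.update({k: v for k, v in data.items() if k in cfg})  # ignore unknown keys
    else:
        path.write_text(json.dumps(cfg, indent=2) + "\n")  # first run: persist defaults
    for k in ("think", "search"):
        if cfg[k] not in {"l", "m", "h"}: raise ValueError(f"{k} must be one of h/m/l")
    return cfg
```

Unknown keys in the file are ignored rather than rejected, so the format can grow without breaking older installs.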
## Development

See [DEV.md](DEV.md) for project layout, architecture, persistence details, and development workflow.
|
@@ -0,0 +1,444 @@
|
|
|
1
|
+
from __future__ import annotations
|
|
2
|
+
|
|
3
|
+
import html,json,os,re,sqlite3,sys
|
|
4
|
+
from dataclasses import dataclass,field
|
|
5
|
+
from functools import partial
|
|
6
|
+
from pathlib import Path
|
|
7
|
+
from typing import Callable
|
|
8
|
+
|
|
9
|
+
from fastcore.xdg import xdg_config_home
|
|
10
|
+
from IPython import get_ipython
|
|
11
|
+
from IPython.core.inputtransformer2 import leading_empty_lines
|
|
12
|
+
from IPython.core.magic import Magics, line_cell_magic, magics_class
|
|
13
|
+
from lisette.core import Chat,StreamFormatter
|
|
14
|
+
from rich.console import Console
|
|
15
|
+
from rich.markdown import Markdown
|
|
16
|
+
from rich.text import Text
|
|
17
|
+
|
|
18
|
+
DEFAULT_MODEL = "claude-sonnet-4-6"
|
|
19
|
+
DEFAULT_THINK = "l"
|
|
20
|
+
DEFAULT_SEARCH = "l"
|
|
21
|
+
DEFAULT_CODE_THEME = "monokai"
|
|
22
|
+
DEFAULT_SYSTEM_PROMPT = """You are an AI assistant running inside IPython.
|
|
23
|
+
|
|
24
|
+
The user interacts with you through `ipyai`, an IPython extension that turns input starting with a backtick into an AI prompt.
|
|
25
|
+
|
|
26
|
+
You may receive:
|
|
27
|
+
- a `<context>` XML block containing recent IPython code and optional outputs
|
|
28
|
+
- a `<user-request>` XML block containing the user's actual request
|
|
29
|
+
|
|
30
|
+
You can respond in Markdown. Your final visible output in terminal IPython will be rendered with Rich, so normal Markdown formatting, fenced code blocks, lists, and tables are appropriate when useful.
|
|
31
|
+
|
|
32
|
+
When the user mentions `&`-backtick tool references such as `&`tool_name``, the corresponding callable from the active IPython namespace may be available to you as a tool. Use tools when they will materially improve correctness or completeness; otherwise answer directly.
|
|
33
|
+
|
|
34
|
+
Assume you are helping an interactive Python user. Prefer concise, accurate, practical responses. When writing code, default to Python unless the user asks for something else.
|
|
35
|
+
"""
|
|
36
|
+
AI_MAGIC_NAME = "ipyai"
|
|
37
|
+
AI_LAST_PROMPT = "_ai_last_prompt"
|
|
38
|
+
AI_LAST_RESPONSE = "_ai_last_response"
|
|
39
|
+
AI_EXTENSION_NS = "_ipyai"
|
|
40
|
+
AI_EXTENSION_ATTR = "_ipyai_extension"
|
|
41
|
+
AI_RESET_LINE_NS = "_ipyai_reset_line"
|
|
42
|
+
AI_PROMPTS_TABLE = "ai_prompts"
|
|
43
|
+
|
|
44
|
+
__all__ = "AI_EXTENSION_ATTR AI_EXTENSION_NS AI_LAST_PROMPT AI_LAST_RESPONSE AI_MAGIC_NAME AI_PROMPTS_TABLE AI_RESET_LINE_NS DEFAULT_MODEL " \
|
|
45
|
+
"IPyAIExtension create_extension default_config_path default_sysp_path is_backtick_prompt load_ipython_extension prompt_from_lines " \
|
|
46
|
+
"stream_to_stdout transform_backticks unload_ipython_extension".split()
|
|
47
|
+
|
|
48
|
+
_prompt_template = """{context}<user-request>{prompt}</user-request>"""
|
|
49
|
+
_tool_re = re.compile(r"&`(\w+)`")
|
|
50
|
+
_tool_block_re = re.compile(r"<details class='tool-usage-details'>\s*<summary>(.*?)</summary>\s*```json\s*(.*?)\s*```\s*</details>", flags=re.DOTALL)
|
|
51
|
+
|
|
52
|
+
|
|
53
|
+
def is_backtick_prompt(lines: list[str]) -> bool: return bool(lines) and lines[0].startswith("`")
|
|
54
|
+
|
|
55
|
+
|
|
56
|
+
def _drop_continuation_backslashes(lines: list[str]) -> list[str]:
|
|
57
|
+
res = []
|
|
58
|
+
for i,line in enumerate(lines):
|
|
59
|
+
if i < len(lines)-1 and line.endswith("\\\n"): line = line[:-2] + "\n"
|
|
60
|
+
res.append(line)
|
|
61
|
+
return res
|
|
62
|
+
|
|
63
|
+
|
|
64
|
+
def prompt_from_lines(lines: list[str]) -> str | None:
|
|
65
|
+
if not is_backtick_prompt(lines): return None
|
|
66
|
+
first,*rest = lines
|
|
67
|
+
return "".join(_drop_continuation_backslashes([first[1:], *rest]))
|
|
68
|
+
|
|
69
|
+
|
|
70
|
+
def transform_backticks(lines: list[str], magic: str=AI_MAGIC_NAME) -> list[str]:
|
|
71
|
+
prompt = prompt_from_lines(lines)
|
|
72
|
+
if prompt is None: return lines
|
|
73
|
+
return [f"get_ipython().run_cell_magic({magic!r}, '', {prompt!r})\n"]
|
|
74
|
+
|
|
75
|
+
|
|
76
|
+
def _xml_attr(o) -> str: return html.escape(str(o), quote=True)
|
|
77
|
+
|
|
78
|
+
|
|
79
|
+
def _xml_text(o) -> str: return html.escape(str(o), quote=False)
|
|
80
|
+
|
|
81
|
+
|
|
82
|
+
def _tag(name: str, content="", **attrs) -> str:
|
|
83
|
+
ats = "".join(f' {k}="{_xml_attr(v)}"' for k,v in attrs.items())
|
|
84
|
+
return f"<{name}{ats}>{content}</{name}>"
|
|
85
|
+
|
|
86
|
+
|
|
87
|
+
def _is_ipyai_input(source: str) -> bool:
|
|
88
|
+
src = source.lstrip()
|
|
89
|
+
return src.startswith("`") or src.startswith("%ipyai") or src.startswith("%%ipyai")
|
|
90
|
+
|
|
91
|
+
|
|
92
|
+
def _tool_names(text: str) -> set[str]: return set(_tool_re.findall(text or ""))
|
|
93
|
+
|
|
94
|
+
|
|
95
|
+
def _tool_refs(prompt: str, hist: list[dict]) -> set[str]:
|
|
96
|
+
names = _tool_names(prompt)
|
|
97
|
+
for o in hist: names |= _tool_names(o["prompt"])
|
|
98
|
+
return names
|
|
99
|
+
|
|
100
|
+
|
|
101
|
+
def _single_line(s: str) -> str: return re.sub(r"\s+", " ", s.strip())
|
|
102
|
+
|
|
103
|
+
|
|
104
|
+
def _truncate_short(s: str, mx: int=100) -> str:
|
|
105
|
+
s = _single_line(s)
|
|
106
|
+
return s if len(s) <= mx else s[:mx-3] + "..."
|
|
107
|
+
|
|
108
|
+
|
|
109
|
+
def compact_tool_display(text: str, result_chars: int=100) -> str:
|
|
110
|
+
def _repl(m):
|
|
111
|
+
summary,payload = m.groups()
|
|
112
|
+
try: result = json.loads(payload).get("result", "")
|
|
113
|
+
except Exception: return m.group(0)
|
|
114
|
+
return f"🔧 {_single_line(summary)} => {_truncate_short(str(result), mx=result_chars)}"
|
|
115
|
+
return _tool_block_re.sub(_repl, text)
|
|
116
|
+
|
|
117
|
+
|
|
118
|
+
def _with_trailing_newline(text: str) -> str: return text if not text or text.endswith("\n") else text + "\n"
|
|
119
|
+
|
|
120
|
+
|
|
121
|
+
def _display_line_count(out, shown: str) -> int:
|
|
122
|
+
if not shown: return 0
|
|
123
|
+
console = Console(file=out, force_terminal=getattr(out, "isatty", lambda: False)(), soft_wrap=True)
|
|
124
|
+
lines = console.render_lines(Text(shown), pad=False)
|
|
125
|
+
return len(lines)
|
|
126
|
+
|
|
127
|
+
|
|
128
|
+
def _clear_terminal_block(out, shown: str):
|
|
129
|
+
nlines = _display_line_count(out, shown)
|
|
130
|
+
if nlines > 1: out.write(f"\x1b[{nlines-1}F")
|
|
131
|
+
out.write("\x1b[J")
|
|
132
|
+
out.flush()
|
|
133
|
+
|
|
134
|
+
|
|
135
|
+
def _render_markdown(out, text: str, code_theme: str, console_cls=Console, markdown_cls=Markdown):
|
|
136
|
+
md = markdown_cls(text, code_theme=code_theme, inline_code_theme=code_theme, inline_code_lexer="python")
|
|
137
|
+
console = console_cls(file=out, force_terminal=getattr(out, "isatty", lambda: False)(), soft_wrap=True)
|
|
138
|
+
console.print(md)
|
|
139
|
+
|
|
140
|
+
|
|
141
|
+
def _rewrite_terminal_output(out, shown: str, rewritten: str, code_theme: str, console_cls=Console, markdown_cls=Markdown):
|
|
142
|
+
if not getattr(out, "isatty", lambda: False)(): return
|
|
143
|
+
_clear_terminal_block(out, shown)
|
|
144
|
+
_render_markdown(out, rewritten, code_theme, console_cls=console_cls, markdown_cls=markdown_cls)
|
|
145
|
+
|
|
146
|
+
|
|
147
|
+
def stream_to_stdout(stream, formatter_cls: Callable[..., StreamFormatter]=StreamFormatter, out=None, code_theme: str=DEFAULT_CODE_THEME,
|
|
148
|
+
console_cls=Console, markdown_cls=Markdown) -> str:
|
|
149
|
+
out = sys.stdout if out is None else out
|
|
150
|
+
fmt = formatter_cls()
|
|
151
|
+
res = []
|
|
152
|
+
for chunk in fmt.format_stream(stream):
|
|
153
|
+
if not chunk: continue
|
|
154
|
+
out.write(chunk)
|
|
155
|
+
out.flush()
|
|
156
|
+
res.append(chunk)
|
|
157
|
+
text = "".join(res)
|
|
158
|
+
shown = _with_trailing_newline(text)
|
|
159
|
+
if shown != text:
|
|
160
|
+
out.write("\n")
|
|
161
|
+
out.flush()
|
|
162
|
+
rewritten = compact_tool_display(text)
|
|
163
|
+
if getattr(out, "isatty", lambda: False)(): _rewrite_terminal_output(out, shown, rewritten, code_theme, console_cls=console_cls, markdown_cls=markdown_cls)
|
|
164
|
+
return text
|
|
165
|
+
|
|
166
|
+
|
|
167
|
+
def default_config_path() -> Path: return xdg_config_home()/"ipyai"/"config.json"
|
|
168
|
+
|
|
169
|
+
|
|
170
|
+
def default_sysp_path() -> Path: return xdg_config_home()/"ipyai"/"sysp.txt"
|
|
171
|
+
|
|
172
|
+
|
|
173
|
+
def _validate_level(name: str, value: str, default: str) -> str:
|
|
174
|
+
value = (value or default).strip().lower()
|
|
175
|
+
if value not in {"l", "m", "h"}: raise ValueError(f"{name} must be one of h/m/l, got {value!r}")
|
|
176
|
+
return value
|
|
177
|
+
|
|
178
|
+
|
|
179
|
+
def _default_config():
|
|
180
|
+
return dict(model=os.environ.get("IPYAI_MODEL", DEFAULT_MODEL), think=DEFAULT_THINK, search=DEFAULT_SEARCH, code_theme=DEFAULT_CODE_THEME)
|
|
181
|
+
|
|
182
|
+
|
|
183
|
+
def load_config(path=None) -> dict:
|
|
184
|
+
path = Path(path or default_config_path())
|
|
185
|
+
cfg = _default_config()
|
|
186
|
+
path.parent.mkdir(parents=True, exist_ok=True)
|
|
187
|
+
if path.exists():
|
|
188
|
+
data = json.loads(path.read_text())
|
|
189
|
+
if not isinstance(data, dict): raise ValueError(f"Invalid config format in {path}")
|
|
190
|
+
cfg.update({k:v for k,v in data.items() if k in cfg})
|
|
191
|
+
else: path.write_text(json.dumps(cfg, indent=2) + "\n")
|
|
192
|
+
cfg["model"] = str(cfg["model"]).strip() or DEFAULT_MODEL
|
|
193
|
+
cfg["think"] = _validate_level("think", cfg["think"], DEFAULT_THINK)
|
|
194
|
+
cfg["search"] = _validate_level("search", cfg["search"], DEFAULT_SEARCH)
|
|
195
|
+
cfg["code_theme"] = str(cfg["code_theme"]).strip() or DEFAULT_CODE_THEME
|
|
196
|
+
return cfg
|
|
197
|
+
|
|
198
|
+
|
|
199
|
+
def load_sysp(path=None) -> str:
|
|
200
|
+
path = Path(path or default_sysp_path())
|
|
201
|
+
path.parent.mkdir(parents=True, exist_ok=True)
|
|
202
|
+
if not path.exists(): path.write_text(DEFAULT_SYSTEM_PROMPT)
|
|
203
|
+
return path.read_text()
|
|
204
|
+
|
|
205
|
+
|
|
206
|
+
@magics_class
|
|
207
|
+
class AIMagics(Magics):
|
|
208
|
+
def __init__(self, shell, ext):
|
|
209
|
+
super().__init__(shell)
|
|
210
|
+
self.ext = ext
|
|
211
|
+
|
|
212
|
+
@line_cell_magic("ipyai")
|
|
213
|
+
def ipyai(self, line: str="", cell: str | None=None):
|
|
214
|
+
if cell is None: return self.ext.handle_line(line)
|
|
215
|
+
return self.ext.run_prompt(cell)
|
|
216
|
+
|
|
217
|
+
|
|
218
|
+
@dataclass
|
|
219
|
+
class IPyAIExtension:
|
|
220
|
+
shell: object
|
|
221
|
+
model: str|None=None
|
|
222
|
+
think: str|None=None
|
|
223
|
+
search: str|None=None
|
|
224
|
+
code_theme: str|None=None
|
|
225
|
+
system_prompt: str|None=None
|
|
226
|
+
chat_cls: Callable[..., Chat]=Chat
|
|
227
|
+
formatter_cls: Callable[..., StreamFormatter]=StreamFormatter
|
|
228
|
+
magic_name: str=AI_MAGIC_NAME
|
|
229
|
+
config_path: Path|None=None
|
|
230
|
+
sysp_path: Path|None=None
|
|
231
|
+
loaded: bool=False
|
|
232
|
+
transformer: Callable = field(init=False)
|
|
233
|
+
|
|
234
|
+
def __post_init__(self):
|
|
235
|
+
self.config_path = Path(self.config_path or default_config_path())
|
|
236
|
+
self.sysp_path = Path(self.sysp_path or default_sysp_path())
|
|
237
|
+
cfg = load_config(self.config_path)
|
|
238
|
+
self.model = self.model or cfg["model"]
|
|
239
|
+
self.think = _validate_level("think", self.think if self.think is not None else cfg["think"], DEFAULT_THINK)
|
|
240
|
+
self.search = _validate_level("search", self.search if self.search is not None else cfg["search"], DEFAULT_SEARCH)
|
|
241
|
+
self.code_theme = str(self.code_theme or cfg["code_theme"]).strip() or DEFAULT_CODE_THEME
|
|
242
|
+
self.system_prompt = self.system_prompt if self.system_prompt is not None else load_sysp(self.sysp_path)
|
|
243
|
+
self.transformer = partial(transform_backticks, magic=self.magic_name)
|
|
244
|
+
|
|
245
|
+
@property
|
|
246
|
+
def history_manager(self): return getattr(self.shell, "history_manager", None)
|
|
247
|
+
|
|
248
|
+
@property
|
|
249
|
+
def session_number(self): return getattr(self.history_manager, "session_number", 0)
|
|
250
|
+
|
|
251
|
+
@property
|
|
252
|
+
def reset_line(self): return self.shell.user_ns.get(AI_RESET_LINE_NS, 0)
|
|
253
|
+
|
|
254
|
+
@property
|
|
255
|
+
def db(self):
|
|
256
|
+
hm = self.history_manager
|
|
257
|
+
return None if hm is None else hm.db
|
|
258
|
+
|
|
259
|
+
def ensure_prompt_table(self):
|
|
260
|
+
if self.db is None: return
|
|
261
|
+
with self.db:
|
|
262
|
+
self.db.execute(
|
|
263
|
+
f"""CREATE TABLE IF NOT EXISTS {AI_PROMPTS_TABLE} (
|
|
264
|
+
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
265
|
+
session INTEGER NOT NULL,
|
|
266
|
+
prompt TEXT NOT NULL,
|
|
267
|
+
response TEXT NOT NULL,
|
|
268
|
+
history_line INTEGER NOT NULL DEFAULT 0
|
|
269
|
+
)"""
|
|
270
|
+
)
|
|
271
|
+
cols = {o[1] for o in self.db.execute(f"PRAGMA table_info({AI_PROMPTS_TABLE})")}
|
|
272
|
+
if "history_line" not in cols:
|
|
273
|
+
self.db.execute(f"ALTER TABLE {AI_PROMPTS_TABLE} ADD COLUMN history_line INTEGER NOT NULL DEFAULT 0")
|
|
274
|
+
self.db.execute(f"CREATE INDEX IF NOT EXISTS idx_{AI_PROMPTS_TABLE}_session_id ON {AI_PROMPTS_TABLE} (session, id)")
|
|
275
|
+
|
|
276
|
+
def prompt_records(self, session: int | None=None) -> list:
|
|
277
|
+
if self.db is None: return []
|
|
278
|
+
self.ensure_prompt_table()
|
|
279
|
+
session = self.session_number if session is None else session
|
|
280
|
+
cur = self.db.execute(f"SELECT id, prompt, response, history_line FROM {AI_PROMPTS_TABLE} WHERE session=? ORDER BY id", (session,))
|
|
281
|
+
return cur.fetchall()
|
|
282
|
+
|
|
283
|
+
def prompt_rows(self, session: int | None=None) -> list: return [(p, r) for _,p,r,_ in self.prompt_records(session=session)]
|
|
284
|
+
|
|
285
|
+
def last_prompt_line(self, session: int | None=None) -> int:
|
|
286
|
+
rows = self.prompt_records(session=session)
|
|
287
|
+
return rows[-1][3] if rows else self.reset_line
|
|
288
|
+
|
|
289
|
+
def current_prompt_line(self) -> int:
|
|
290
|
+
c = getattr(self.shell, "execution_count", 1)
|
|
291
|
+
return max(c-1, 0)
|
|
292
|
+
|
|
293
|
+
def code_history(self, start: int, stop: int) -> list:
|
|
294
|
+
hm = self.history_manager
|
|
295
|
+
if hm is None or stop <= start: return []
|
|
296
|
+
return list(hm.get_range(session=0, start=start, stop=stop, raw=True, output=True))
|
|
297
|
+
|
|
298
|
+
def code_context(self, start: int, stop: int) -> str:
|
|
299
|
+
entries = self.code_history(start, stop)
|
|
300
|
+
parts = []
|
|
301
|
+
for _,line,pair in entries:
|
|
302
|
+
source,output = pair
|
|
303
|
+
if not source or _is_ipyai_input(source): continue
|
|
304
|
+
parts.append(_tag("code", _xml_text(source)))
|
|
305
|
+
if output is not None: parts.append(_tag("output", _xml_text(output)))
|
|
306
|
+
if not parts: return ""
|
|
307
|
+
return _tag("context", "".join(parts)) + "\n"
|
|
308
|
+
|
|
309
|
+
def format_prompt(self, prompt: str, start: int, stop: int) -> str:
|
|
310
|
+
ctx = self.code_context(start, stop)
|
|
311
|
+
return _prompt_template.format(context=ctx, prompt=prompt.strip())
|
|
312
|
+
|
|
313
|
+
def dialog_history(self, current_prompt_line: int) -> list:
|
|
314
|
+
hist,res = [],[]
|
|
315
|
+
prev_line = self.reset_line
|
|
316
|
+
for pid,prompt,response,history_line in self.prompt_records():
|
|
317
|
+
hist += [self.format_prompt(prompt, prev_line+1, history_line), response]
|
|
318
|
+
res.append(dict(id=pid, prompt=prompt, response=response, history_line=history_line))
|
|
319
|
+
prev_line = history_line
|
|
320
|
+
return hist,res
|
|
321
|
+
|
|
322
|
+
def resolve_tools(self, prompt: str, hist: list[dict]) -> list:
|
|
323
|
+
ns = self.shell.user_ns
|
|
324
|
+
names = _tool_refs(prompt, hist)
|
|
325
|
+
missing = [o for o in _tool_names(prompt) if o not in ns]
|
|
326
|
+
if missing: raise NameError(f"Missing tool(s) in user_ns: {', '.join(sorted(missing))}")
|
|
327
|
+
bad = [o for o in _tool_names(prompt) if o in ns and not callable(ns[o])]
|
|
328
|
+
if bad: raise TypeError(f"Non-callable tool(s): {', '.join(sorted(bad))}")
|
|
329
|
+
return [ns[o] for o in sorted(names) if o in ns and callable(ns[o])]
|
|
330
|
+
|
|
331
|
+
def save_prompt(self, prompt: str, response: str, history_line: int):
|
|
332
|
+
if self.db is None: return
|
|
333
|
+
self.ensure_prompt_table()
|
|
334
|
+
with self.db: self.db.execute(f"INSERT INTO {AI_PROMPTS_TABLE} (session, prompt, response, history_line) VALUES (?, ?, ?, ?)",
|
|
335
|
+
(self.session_number, prompt, response, history_line))
|
|
336
|
+
|
|
337
|
+
def reset_session_history(self) -> int:
|
|
338
|
+
if self.db is None: return 0
|
|
339
|
+
self.ensure_prompt_table()
|
|
340
|
+
with self.db: cur = self.db.execute(f"DELETE FROM {AI_PROMPTS_TABLE} WHERE session=?", (self.session_number,))
|
|
341
|
+
self.shell.user_ns.pop(AI_LAST_PROMPT, None)
|
|
342
|
+
self.shell.user_ns.pop(AI_LAST_RESPONSE, None)
|
|
343
|
+
self.shell.user_ns[AI_RESET_LINE_NS] = self.current_prompt_line()
|
|
344
|
+
return cur.rowcount or 0
|
|
345
|
+
|
|
346
|
+
def load(self):
|
|
347
|
+
if self.loaded: return self
|
|
348
|
+
self.ensure_prompt_table()
|
|
349
|
+
cts = self.shell.input_transformer_manager.cleanup_transforms
|
|
350
|
+
if self.transformer not in cts:
|
|
351
|
+
idx = 1 if cts and cts[0] is leading_empty_lines else 0
|
|
352
|
+
cts.insert(idx, self.transformer)
|
|
353
|
+
self.shell.register_magics(AIMagics(self.shell, self))
|
|
354
|
+
self.shell.user_ns[AI_EXTENSION_NS] = self
|
|
355
|
+
self.shell.user_ns.setdefault(AI_RESET_LINE_NS, 0)
|
|
356
|
+
setattr(self.shell, AI_EXTENSION_ATTR, self)
|
|
357
|
+
self.loaded = True
|
|
358
|
+
return self
|
|
359
|
+
|
|
360
|
+
def unload(self):
|
|
361
|
+
if not self.loaded: return self
|
|
362
|
+
cts = self.shell.input_transformer_manager.cleanup_transforms
|
|
363
|
+
if self.transformer in cts: cts.remove(self.transformer)
|
|
364
|
+
if self.shell.user_ns.get(AI_EXTENSION_NS) is self: self.shell.user_ns.pop(AI_EXTENSION_NS, None)
|
|
365
|
+
if getattr(self.shell, AI_EXTENSION_ATTR, None) is self: delattr(self.shell, AI_EXTENSION_ATTR)
|
|
366
|
+
self.loaded = False
|
|
367
|
+
return self
|
|
368
|
+
|
|
369
|
+
def handle_line(self, line: str):
|
|
370
|
+
line = line.strip()
|
|
371
|
+
if not line:
|
|
372
|
+
print(f"Model: {self.model}")
|
|
373
|
+
print(f"Think: {self.think}")
|
|
374
|
+
            print(f"Search: {self.search}")
            print(f"Code theme: {self.code_theme}")
            print(f"Config: {self.config_path}")
            print(f"System prompt: {self.sysp_path}")
            return None
        if line == "model":
            print(f"Model: {self.model}")
            return None
        if line == "think":
            print(f"Think: {self.think}")
            return None
        if line == "search":
            print(f"Search: {self.search}")
            return None
        if line == "code_theme":
            print(f"Code theme: {self.code_theme}")
            return None
        if line == "reset":
            n = self.reset_session_history()
            print(f"Deleted {n} AI prompts from session {self.session_number}.")
            return None
        if line.startswith("model "):
            self.model = line.split(None, 1)[1].strip()
            print(f"Model: {self.model}")
            return None
        if line.startswith("think "):
            self.think = _validate_level("think", line.split(None, 1)[1], self.think)
            print(f"Think: {self.think}")
            return None
        if line.startswith("search "):
            self.search = _validate_level("search", line.split(None, 1)[1], self.search)
            print(f"Search: {self.search}")
            return None
        if line.startswith("code_theme "):
            self.code_theme = line.split(None, 1)[1].strip() or DEFAULT_CODE_THEME
            print(f"Code theme: {self.code_theme}")
            return None
        return self.run_prompt(line)

    def run_prompt(self, prompt: str):
        prompt = (prompt or "").rstrip("\n")
        if not prompt.strip(): return None
        prompt_line = self.current_prompt_line()
        hist,recs = self.dialog_history(prompt_line)
        tools = self.resolve_tools(prompt, recs)
        full_prompt = self.format_prompt(prompt, self.last_prompt_line()+1, prompt_line)
        self.shell.user_ns[AI_LAST_PROMPT] = prompt
        chat = self.chat_cls(model=self.model, sp=self.system_prompt, ns=self.shell.user_ns, hist=hist, tools=tools or None)
        text = stream_to_stdout(chat(full_prompt, stream=True, think=self.think, search=self.search),
                                formatter_cls=self.formatter_cls, code_theme=self.code_theme)
        self.shell.user_ns[AI_LAST_RESPONSE] = text
        self.save_prompt(prompt, text, prompt_line)
        return None


def create_extension(shell=None, **kwargs):
    shell = shell or get_ipython()
    if shell is None: raise RuntimeError("No active IPython shell found")
    ext = getattr(shell, AI_EXTENSION_ATTR, None)
    if ext is None: ext = IPyAIExtension(shell=shell, **kwargs)
    if not ext.loaded: ext.load()
    return ext


def load_ipython_extension(ipython): return create_extension(ipython)


def unload_ipython_extension(ipython):
    ext = getattr(ipython, AI_EXTENSION_ATTR, None)
    if ext is None: return
    ext.unload()
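The `run_prompt` pipeline above is fed by a cleanup transform that rewrites backtick input into a `%%ipyai` cell-magic call. A minimal self-contained sketch of that idea (an illustrative stand-in, not the packaged `transform_backticks`):

```python
# Illustrative sketch of a backtick cleanup transform that routes input
# into the %%ipyai cell magic. NOT ipyai's packaged implementation --
# a minimal stand-in with the same observable behavior.
def transform_backticks(lines):
    if not lines or not lines[0].startswith("`"): return lines
    # Join backslash-continuation lines into newlines, drop the leading backtick
    body = "".join(l[:-2] + "\n" if l.endswith("\\\n") else l for l in lines)[1:]
    if not body.endswith("\n"): body += "\n"
    return [f"get_ipython().run_cell_magic('ipyai', '', {body!r})\n"]
```

Non-backtick input passes through untouched, which is what lets the transform sit in IPython's `cleanup_transforms` list without interfering with normal code.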
ipyai-0.0.1/ipyai.egg-info/PKG-INFO
ADDED
@@ -0,0 +1,158 @@
Metadata-Version: 2.4
Name: ipyai
Version: 0.0.1
Summary: Minimal IPython backtick-to-AI extension
Author-email: Jeremy Howard <info@answer.ai>
License: Apache-2.0
Project-URL: Source, https://github.com/answerdotai/fastship
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: fastcore
Requires-Dist: ipython>=9
Requires-Dist: lisette>=0.0.47
Requires-Dist: rich
Provides-Extra: dev
Requires-Dist: build>=1.0.0; extra == "dev"
Requires-Dist: fastship>=0.0.6; extra == "dev"
Requires-Dist: pytest; extra == "dev"
Requires-Dist: twine>=5.0.0; extra == "dev"

# ipyai

`ipyai` is an IPython extension that turns any input starting with `` ` `` into an AI prompt.

It is aimed at terminal IPython, not notebook frontends. Prompts stream through `lisette`, final output is rendered with `rich`, and prompt history is stored alongside normal IPython history in the same SQLite database.

## Install

```bash
pip install ipyai
```

## Load

```python
%load_ext ipyai
```

If you change the package in a running shell:

```python
%reload_ext ipyai
```

## How To Auto-Load `ipyai`

`ipyai` is designed for terminal IPython. To auto-load it, add this to an `ipython_config.py` file used by terminal `ipython`:

```python
c.TerminalIPythonApp.extensions = ["ipyai"]
```

Good places for that file include:

- env-local: `{sys.prefix}/etc/ipython/ipython_config.py`
- user-local: `~/.ipython/profile_default/ipython_config.py`
- system-wide IPython config directories

In a virtualenv, the env-local path is usually:

- `.venv/etc/ipython/ipython_config.py`

To see which config paths your current `ipython` is searching, run:

```bash
ipython --debug -c 'exit()' 2>&1 | grep Searching
```

## Usage

Only the leading backtick is special. There is no closing delimiter.

Single line:

```python
`write a haiku about sqlite
```

Multiline paste:

```python
`summarize this module:
focus on state management
and persistence behavior
```

Backslash-Enter continuation in the terminal:

```python
`draft a migration plan \
with risks and rollback steps
```

`ipyai` also provides a line and cell magic named `%ipyai` / `%%ipyai`.

## `%ipyai` commands

```python
%ipyai
%ipyai model claude-sonnet-4-6
%ipyai think m
%ipyai search h
%ipyai code_theme monokai
%ipyai reset
```

Behavior:

- `%ipyai` prints the active model, think level, search level, code theme, config path, and system prompt path
- `%ipyai model ...`, `%ipyai think ...`, `%ipyai search ...`, `%ipyai code_theme ...` change the current session only
- `%ipyai reset` deletes AI prompt history for the current IPython session and resets the code-context baseline

## Tools

To expose a function from the active IPython namespace as a tool for the current conversation, reference it as `&\`name\`` in the prompt:

```python
def weather(city): return f"Sunny in {city}"

`use &`weather` to answer the question about Brisbane
```

## Output Rendering

Responses are streamed directly to the terminal during generation. After streaming completes:

- the stored response remains the original full `lisette` output
- the visible terminal output is re-rendered with `rich.Markdown`
- tool call detail blocks are compacted to a short single-line form such as `🔧 f(x=1) => 2`

## Configuration

On first load, `ipyai` creates two files under the XDG config directory:

- `~/.config/ipyai/config.json`
- `~/.config/ipyai/sysp.txt`

`config.json` currently supports:

```json
{
  "model": "claude-sonnet-4-6",
  "think": "l",
  "search": "l",
  "code_theme": "monokai"
}
```

Notes:

- `model` defaults from `IPYAI_MODEL` if that environment variable is set when the config file is first created
- `think` and `search` must be one of `l`, `m`, or `h`
- `code_theme` is passed to Rich for fenced and inline code styling

`sysp.txt` is used as the system prompt passed to `lisette.Chat`.

## Development

See [DEV.md](DEV.md) for project layout, architecture, persistence details, and development workflow.
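The `🔧 f(x=1) => 2` compaction described in the README's Output Rendering section can be sketched as a regex pass over the rendered `<details>` blocks. This is an illustrative assumption about the mechanism, not ipyai's `compact_tool_display` itself; the pattern and the truncation length are stand-ins:

```python
import json, re

# Illustrative sketch of compacting a lisette tool-call <details> block
# into the one-line "🔧 summary => result" form the README describes.
# The regex and maxlen are assumptions, not ipyai's packaged code.
TOOL_DETAILS = re.compile(
    r"<details class='tool-usage-details'>\s*<summary>(.*?)</summary>\s*"
    r"```json\s*(\{.*?\})\s*```\s*</details>", re.S)

def compact_tool_display(text, maxlen=80):
    def repl(m):
        # Pull the tool result out of the JSON payload and truncate it
        result = str(json.loads(m.group(2)).get("result", ""))
        if len(result) > maxlen: result = result[:maxlen] + "..."
        return f"🔧 {m.group(1)} => {result}"
    return TOOL_DETAILS.sub(repl, text)
```

Note that the stored response keeps the full `<details>` block; only the visible terminal rendering is compacted.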
ipyai-0.0.1/ipyai.egg-info/dependency_links.txt
ADDED
@@ -0,0 +1 @@

ipyai-0.0.1/ipyai.egg-info/top_level.txt
ADDED
@@ -0,0 +1 @@
ipyai
ipyai-0.0.1/pyproject.toml
ADDED
@@ -0,0 +1,30 @@
[build-system]
requires = ["setuptools>=64"]
build-backend = "setuptools.build_meta"

[project]
name = "ipyai"
dynamic = ["version"]
description = "Minimal IPython backtick-to-AI extension"
readme = "README.md"
requires-python = ">=3.10"
license = {text = "Apache-2.0"}
authors = [{name = "Jeremy Howard", email = "info@answer.ai"}]
dependencies = ["fastcore", "ipython>=9", "lisette>=0.0.47", "rich"]

[project.urls]
Source = "https://github.com/answerdotai/fastship"

[project.optional-dependencies]
dev = ["build>=1.0.0", "fastship>=0.0.6", "pytest", "twine>=5.0.0"]

[tool.setuptools.dynamic]
version = {attr = "ipyai.__version__"}

[tool.setuptools.packages.find]
include = ["ipyai", "ipyai.*"]

[tool.fastship]
branch = "main"
changelog_file = "CHANGELOG.md"
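The first-load `config.json` behavior described in the README (defaults written under `XDG_CONFIG_HOME`, `model` seeded from `IPYAI_MODEL`) can be sketched as follows. The `load_config` helper, the merge-with-defaults behavior, and the default values shown are assumptions for illustration, not ipyai's internal API:

```python
import json, os
from pathlib import Path

# Illustrative sketch of first-load config creation. Helper name, merge
# behavior, and defaults are assumptions, not ipyai's internal API.
DEFAULTS = dict(model=os.environ.get("IPYAI_MODEL", "claude-sonnet-4-6"),
                think="l", search="l", code_theme="monokai")

def load_config(path):
    path = Path(path)
    if not path.exists():
        # First load: write the defaults so the user has a file to edit
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(DEFAULTS, indent=2))
    # Missing keys fall back to the defaults
    return {**DEFAULTS, **json.loads(path.read_text())}
```

A partial config file then overrides only the keys it names, which matches how the tests below exercise a config containing just `model`, `think`, and `search`.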
ipyai-0.0.1/tests/test_core.py
ADDED
@@ -0,0 +1,348 @@
from __future__ import annotations

import io,json,sqlite3

import pytest
from IPython.core.inputtransformer2 import TransformerManager

from ipyai.core import AI_EXTENSION_NS, AI_LAST_PROMPT, AI_LAST_RESPONSE, AI_RESET_LINE_NS
from ipyai.core import DEFAULT_CODE_THEME, DEFAULT_SEARCH, DEFAULT_SYSTEM_PROMPT, DEFAULT_THINK
from ipyai.core import IPyAIExtension, _clear_terminal_block, _display_line_count, compact_tool_display
from ipyai.core import prompt_from_lines, stream_to_stdout, transform_backticks


class DummyFormatter:
    def format_stream(self, stream): yield from stream


class TTYStringIO(io.StringIO):
    def isatty(self): return True


class DummyMarkdown:
    def __init__(self, text, **kwargs): self.text,self.kwargs = text,kwargs


class DummyConsole:
    instances = []

    def __init__(self, **kwargs):
        self.kwargs = kwargs
        self.printed = []
        type(self).instances.append(self)

    def print(self, obj):
        self.printed.append(obj)
        self.kwargs["file"].write(f"RICH:{obj.text}")


class DummyChat:
    instances = []

    def __init__(self, **kwargs):
        self.kwargs = kwargs
        self.calls = []
        type(self).instances.append(self)

    def __call__(self, prompt, stream=False, **kwargs):
        self.calls.append((prompt, stream, kwargs))
        return ["first ", "second"]


class DummyHistory:
    def __init__(self, session_number=1):
        self.session_number = session_number
        self.db = sqlite3.connect(":memory:")
        self.entries = {}

    def add(self, line, source, output=None): self.entries[line] = (source, output)

    def get_range(self, session=0, start=1, stop=None, raw=True, output=False):
        if stop is None: stop = max(self.entries, default=0) + 1
        for i in range(start, stop):
            if i not in self.entries: continue
            src,out = self.entries[i]
            yield (0, i, (src, out) if output else src)


class DummyInputTransformerManager:
    def __init__(self): self.cleanup_transforms = []


class DummyShell:
    def __init__(self):
        self.input_transformer_manager = DummyInputTransformerManager()
        self.user_ns = {}
        self.magics = []
        self.history_manager = DummyHistory()
        self.execution_count = 2

    def register_magics(self, magics): self.magics.append(magics)


@pytest.fixture(autouse=True)
def _xdg_config_home(monkeypatch, tmp_path): monkeypatch.setenv("XDG_CONFIG_HOME", str(tmp_path))


def test_prompt_from_lines_drops_continuation_backslashes():
    lines = ["`plan this work\\\n", "with two lines\n"]
    assert prompt_from_lines(lines) == "plan this work\nwith two lines\n"


def test_transform_backticks_executes_ai_magic_call():
    seen = {}

    class DummyIPython:
        def run_cell_magic(self, magic, line, cell): seen.update(magic=magic, line=line, cell=cell)

    code = "".join(transform_backticks(["`hello\n", "world\n"]))
    exec(code, {"get_ipython": lambda: DummyIPython()})
    assert seen == dict(magic="ipyai", line="", cell="hello\nworld\n")


def test_stream_to_stdout_collects_streamed_text():
    out = io.StringIO()
    text = stream_to_stdout(["a", "b"], formatter_cls=DummyFormatter, out=out)
    assert text == "ab"
    assert out.getvalue() == "ab\n"


def test_compact_tool_display_uses_summary_and_truncates_result():
    text = """Before

<details class='tool-usage-details'>
<summary>f(x=1)</summary>

```json
{"id":"toolu_1","call":{"function":"f","arguments":{"x":"1"}},"result":"%s"}
```

</details>

After""" % ("a" * 120)

    res = compact_tool_display(text)
    assert "🔧 f(x=1) => " in res
    assert "a" * 120 not in res
    assert "..." in res
    assert "\n\n\n🔧" not in res
    assert "🔧 f(x=1)" in res


def test_stream_to_stdout_rewrites_tty_display_but_returns_full_text():
    tool_block = """<details class='tool-usage-details'>
<summary>f(x=1)</summary>

```json
{"id":"toolu_1","call":{"function":"f","arguments":{"x":"1"}},"result":"2"}
```

</details>"""
    out = TTYStringIO()
    text = stream_to_stdout([tool_block], formatter_cls=DummyFormatter, out=out)
    assert text == tool_block
    assert "\x1b[" in out.getvalue()
    assert "🔧 f(x=1) => 2" in out.getvalue()
    assert "\n\n\n🔧" not in out.getvalue()


def test_stream_to_stdout_renders_final_markdown_with_rich_options():
    DummyConsole.instances = []
    out = TTYStringIO()
    text = stream_to_stdout(["`x`"], formatter_cls=DummyFormatter, out=out, code_theme="github-dark", console_cls=DummyConsole, markdown_cls=DummyMarkdown)

    assert text == "`x`"
    md = DummyConsole.instances[-1].printed[-1]
    assert md.text == "`x`"
    assert md.kwargs == dict(code_theme="github-dark", inline_code_theme="github-dark", inline_code_lexer="python")


def test_display_line_count_accounts_for_wrapped_terminal_lines():
    out = TTYStringIO()
    assert _display_line_count(out, "x" * 200) > 1


def test_clear_terminal_block_does_not_move_above_input_line():
    out = TTYStringIO()
    _clear_terminal_block(out, "abc\n")
    assert out.getvalue().startswith("\x1b[1F\x1b[J")


def test_extension_load_is_idempotent_and_tracks_last_response():
    DummyChat.instances = []
    shell = DummyShell()
    ext = IPyAIExtension(shell=shell, chat_cls=DummyChat, formatter_cls=DummyFormatter).load()
    ext.load()
    assert shell.input_transformer_manager.cleanup_transforms == [ext.transformer]
    assert len(shell.magics) == 1
    assert shell.user_ns[AI_EXTENSION_NS] is ext

    ext.run_prompt("tell me something")

    assert DummyChat.instances[-1].kwargs["hist"] == []
    assert DummyChat.instances[-1].calls == [(
        "<user-request>tell me something</user-request>",
        True,
        dict(search=DEFAULT_SEARCH, think=DEFAULT_THINK),
    )]
    assert DummyChat.instances[-1].kwargs["model"] == ext.model
    assert DummyChat.instances[-1].kwargs["sp"] == DEFAULT_SYSTEM_PROMPT
    assert shell.user_ns[AI_LAST_PROMPT] == "tell me something"
    assert shell.user_ns[AI_LAST_RESPONSE] == "first second"
    assert ext.prompt_rows() == [("tell me something", "first second")]


def test_config_file_is_created_and_loaded(tmp_path):
    cfg_path = tmp_path/"ipyai"/"config.json"
    sysp_path = tmp_path/"ipyai"/"sysp.txt"
    shell = DummyShell()

    ext = IPyAIExtension(shell=shell, chat_cls=DummyChat, formatter_cls=DummyFormatter, config_path=cfg_path, sysp_path=sysp_path)

    assert cfg_path.exists()
    assert sysp_path.exists()
    data = json.loads(cfg_path.read_text())
    assert data["model"] == ext.model
    assert data["think"] == DEFAULT_THINK
    assert data["search"] == DEFAULT_SEARCH
    assert data["code_theme"] == DEFAULT_CODE_THEME
    assert sysp_path.read_text() == DEFAULT_SYSTEM_PROMPT
    assert ext.system_prompt == DEFAULT_SYSTEM_PROMPT


def test_existing_sysp_file_is_loaded(tmp_path):
    cfg_path = tmp_path/"ipyai"/"config.json"
    sysp_path = tmp_path/"ipyai"/"sysp.txt"
    sysp_path.parent.mkdir(parents=True, exist_ok=True)
    sysp_path.write_text("custom sysp")
    shell = DummyShell()

    ext = IPyAIExtension(shell=shell, chat_cls=DummyChat, formatter_cls=DummyFormatter, config_path=cfg_path, sysp_path=sysp_path)

    assert ext.system_prompt == "custom sysp"


def test_config_values_drive_model_think_and_search(tmp_path):
    cfg_path = tmp_path/"ipyai"/"config.json"
    cfg_path.parent.mkdir(parents=True, exist_ok=True)
    cfg_path.write_text(json.dumps(dict(model="cfg-model", think="m", search="h")))
    DummyChat.instances = []
    shell = DummyShell()
    ext = IPyAIExtension(shell=shell, chat_cls=DummyChat, formatter_cls=DummyFormatter, config_path=cfg_path).load()

    ext.run_prompt("tell me something")

    assert ext.model == "cfg-model"
    assert ext.think == "m"
    assert ext.search == "h"
    assert DummyChat.instances[-1].calls == [(
        "<user-request>tell me something</user-request>",
        True,
        dict(search="h", think="m"),
    )]


def test_handle_line_can_report_and_set_model(capsys):
    shell = DummyShell()
    ext = IPyAIExtension(shell=shell, chat_cls=DummyChat, formatter_cls=DummyFormatter, model="old-model", think="m", search="h", code_theme="github-dark")

    ext.handle_line("")
    assert capsys.readouterr().out == (
        f"Model: old-model\nThink: m\nSearch: h\nCode theme: github-dark\nConfig: {ext.config_path}\nSystem prompt: {ext.sysp_path}\n"
    )

    ext.handle_line("model new-model")
    assert ext.model == "new-model"
    assert capsys.readouterr().out == "Model: new-model\n"

    ext.handle_line("think l")
    assert ext.think == "l"
    assert capsys.readouterr().out == "Think: l\n"

    ext.handle_line("search m")
    assert ext.search == "m"
    assert capsys.readouterr().out == "Search: m\n"

    ext.handle_line("code_theme ansi_dark")
    assert ext.code_theme == "ansi_dark"
    assert capsys.readouterr().out == "Code theme: ansi_dark\n"


def test_second_prompt_uses_sqlite_prompt_history():
    DummyChat.instances = []
    shell = DummyShell()
    ext = IPyAIExtension(shell=shell, chat_cls=DummyChat, formatter_cls=DummyFormatter).load()

    ext.run_prompt("first prompt")
    shell.execution_count = 3
    ext.run_prompt("second prompt")

    assert DummyChat.instances[1].kwargs["hist"] == [
        "<user-request>first prompt</user-request>",
        "first second",
    ]
    assert ext.prompt_rows() == [
        ("first prompt", "first second"),
        ("second prompt", "first second"),
    ]


def test_reset_only_deletes_current_session_history(capsys):
    shell = DummyShell()
    ext = IPyAIExtension(shell=shell, chat_cls=DummyChat, formatter_cls=DummyFormatter).load()

    ext.save_prompt("s1 prompt", "s1 response", 1)
    shell.history_manager.session_number = 2
    shell.execution_count = 8
    ext.save_prompt("s2 prompt", "s2 response", 7)

    ext.handle_line("reset")

    assert capsys.readouterr().out == "Deleted 1 AI prompts from session 2.\n"
    assert ext.prompt_rows(session=1) == [("s1 prompt", "s1 response")]
    assert ext.prompt_rows(session=2) == []
    assert shell.user_ns[AI_RESET_LINE_NS] == 7


def test_tools_resolve_from_ampersand_backticks():
    shell = DummyShell()
    shell.user_ns["demo"] = lambda: "ok"
    ext = IPyAIExtension(shell=shell, chat_cls=DummyChat, formatter_cls=DummyFormatter).load()

    tools = ext.resolve_tools("please call &`demo` now", [])
    assert len(tools) == 1
    assert tools[0]() == "ok"


def test_context_xml_includes_code_and_outputs_since_last_prompt():
    shell = DummyShell()
    shell.history_manager.add(1, "a = 1")
    shell.history_manager.add(2, "a", "1")
    ext = IPyAIExtension(shell=shell, chat_cls=DummyChat, formatter_cls=DummyFormatter).load()

    ctx = ext.code_context(1, 3)
    assert "<context><code>a = 1</code><code>a</code><output>1</output></context>\n" == ctx


def test_history_context_uses_lines_since_last_prompt_only():
    shell = DummyShell()
    shell.history_manager.add(1, "before = 1")
    shell.history_manager.add(2, "`first prompt")
    shell.history_manager.add(3, "after = 2")
    shell.execution_count = 3
    ext = IPyAIExtension(shell=shell, chat_cls=DummyChat, formatter_cls=DummyFormatter).load()
    ext.save_prompt("first prompt", "first response", 2)

    prompt = ext.format_prompt("second prompt", ext.last_prompt_line()+1, 4)
    assert "before = 1" not in prompt
    assert "after = 2" in prompt


def test_cleanup_transform_prevents_help_syntax_interference():
    tm = TransformerManager()
    tm.cleanup_transforms.insert(1, transform_backticks)

    code = tm.transform_cell("`I am testing my new AI prompt system.\\\nTell me do you see a newline in this prompt?")
    assert code == "get_ipython().run_cell_magic('ipyai', '', 'I am testing my new AI prompt system.\\nTell me do you see a newline in this prompt?\\n')\n"
    assert tm.check_complete("`I am testing my new AI prompt system.\\") == ("incomplete", 0)
    assert tm.check_complete("`I am testing my new AI prompt system.\\\nTell me do you see a newline in this prompt?") == ("complete", None)