kopipasta 0.27.0__tar.gz → 0.29.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

@@ -0,0 +1,111 @@
+ Metadata-Version: 2.1
+ Name: kopipasta
+ Version: 0.29.0
+ Summary: A CLI tool to generate prompts with project structure and file contents
+ Home-page: https://github.com/mkorpela/kopipasta
+ Author: Mikko Korpela
+ Author-email: mikko.korpela@gmail.com
+ License: MIT
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.8
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: pyperclip==1.9.0
+ Requires-Dist: requests==2.32.3
+ Requires-Dist: Pygments==2.18.0
+ Requires-Dist: rich==13.8.1
+ Requires-Dist: click==8.2.1
+
+ # kopipasta
+
+ [![Version](https://img.shields.io/pypi/v/kopipasta.svg)](https://pypi.python.org/pypi/kopipasta)
+ [![Downloads](http://pepy.tech/badge/kopipasta)](http://pepy.tech/project/kopipasta)
+
+ A CLI tool for taking **full, transparent control** of your LLM context. No black boxes.
+
+ <img src="kopipasta.jpg" alt="kopipasta" width="300">
+
+ - An LLM told me that "kopi" means coffee in some languages... and a diffusion model then made this delicious soup.
+
+ ## The Philosophy: You Control the Context
+
+ Many AI coding assistants use Retrieval-Augmented Generation (RAG) to automatically find what *they think* is relevant context. This is a black box. When the LLM gives a bad answer, you can't debug it because you don't know what context it was actually given.
+
+ **`kopipasta` is the opposite.** I built it for myself on the principle of **explicit context control**. You are in the driver's seat. You decide *exactly* what files, functions, and snippets go into the prompt. This transparency is the key to getting reliable, debuggable results from an LLM.
+
+ It's a "smart copy" command for your project, not a magic wand.
+
+ ## How It Works
+
+ The workflow is dead simple:
+
+ 1. **Gather:** Run `kopipasta` and point it at the files, directories, and URLs that matter for your task.
+ 2. **Select:** The tool interactively helps you choose what to include. For large files, you can send just a snippet or even hand-pick individual functions.
+ 3. **Define:** Your default editor (`$EDITOR`) opens for you to write your instructions to the LLM.
+ 4. **Paste:** The final, comprehensive prompt is now on your clipboard, ready to be pasted into ChatGPT, Gemini, Claude, or your LLM of choice.
+
+ ## Installation
+
+ ```bash
+ # Using pipx (recommended for CLI tools)
+ pipx install kopipasta
+
+ # Or using standard pip
+ pip install kopipasta
+ ```
+
+ ## Usage
+
+ ```bash
+ kopipasta [options] [files_or_directories_or_urls...]
+ ```
+
+ **Arguments:**
+
+ * `[files_or_directories_or_urls...]`: One or more paths to files, directories, or web URLs to use as the starting point for your context.
+
+ **Options:**
+
+ * `-t TASK`, `--task TASK`: Provide the task description directly on the command line, skipping the editor.
+
+ ## Key Features
+
+ * **Total Context Control:** Interactively select files, directories, snippets, or even individual functions. You see everything that goes into the prompt.
+ * **Transparent & Explicit:** No hidden RAG. You know exactly what's in the prompt because you built it. This makes debugging LLM failures possible.
+ * **Web-Aware:** Pulls in content directly from URLs—perfect for API documentation.
+ * **Safety First:**
+     * Automatically respects your `.gitignore` rules.
+     * Detects if you're about to include secrets from a `.env` file and asks what to do.
+ * **Context-Aware:** Keeps a running total of the prompt size (in characters and estimated tokens) so you don't overload the LLM's context window.
+ * **Developer-Friendly:**
+     * Uses your familiar `$EDITOR` for writing task descriptions.
+     * Copies the final prompt directly to your clipboard.
+     * Provides syntax highlighting during chunk selection.
+
+ ## A Real-World Example
+
+ I had a bug where my `setup.py` didn't include all the dependencies from `requirements.txt`.
+
+ 1. I ran `kopipasta -t "Update setup.py to read dependencies dynamically from requirements.txt" setup.py requirements.txt`.
+ 2. The tool confirmed the inclusion of both files and copied the complete prompt to my clipboard.
+ 3. I pasted the prompt into my LLM chat window.
+ 4. I copied the LLM's suggested code back into my local `setup.py`.
+ 5. I tested the changes and committed.
+
+ No manual file reading, no clumsy copy-pasting, just a clean, context-rich prompt that I had full control over.
+
+ ## Configuration
+
+ Set your preferred command-line editor via the `EDITOR` environment variable.
+ ```bash
+ export EDITOR=nvim # or vim, nano, code --wait, etc.
+ ```
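The `.env` safeguard described under Key Features could, for illustration, work by flagging assignment lines before they enter the prompt. This is a purely hypothetical sketch (`find_env_secrets` is not kopipasta's actual API; the real implementation is not shown in this diff):

```python
import re

# Matches lines like KEY=value or "export KEY=value"; comments are skipped below.
ASSIGNMENT = re.compile(r"^\s*(?:export\s+)?([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.+)$")

def find_env_secrets(env_text):
    """Return (key, value) pairs that look like secrets in .env content."""
    secrets = []
    for line in env_text.splitlines():
        if line.lstrip().startswith("#"):
            continue  # skip comments
        match = ASSIGNMENT.match(line)
        if match and match.group(2).strip():
            secrets.append((match.group(1), match.group(2).strip()))
    return secrets

env = "# local config\nAPI_KEY=sk-12345\nexport DB_URL=postgres://u:p@host/db\n"
for key, _ in find_env_secrets(env):
    print(f"{key} looks like a secret; include, mask, or skip?")
```

A tool using this would then prompt the user per key, rather than silently pasting credentials into an LLM chat.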
@@ -0,0 +1,84 @@
+ # kopipasta
+
+ [![Version](https://img.shields.io/pypi/v/kopipasta.svg)](https://pypi.python.org/pypi/kopipasta)
+ [![Downloads](http://pepy.tech/badge/kopipasta)](http://pepy.tech/project/kopipasta)
+
+ A CLI tool for taking **full, transparent control** of your LLM context. No black boxes.
+
+ <img src="kopipasta.jpg" alt="kopipasta" width="300">
+
+ - An LLM told me that "kopi" means coffee in some languages... and a diffusion model then made this delicious soup.
+
+ ## The Philosophy: You Control the Context
+
+ Many AI coding assistants use Retrieval-Augmented Generation (RAG) to automatically find what *they think* is relevant context. This is a black box. When the LLM gives a bad answer, you can't debug it because you don't know what context it was actually given.
+
+ **`kopipasta` is the opposite.** I built it for myself on the principle of **explicit context control**. You are in the driver's seat. You decide *exactly* what files, functions, and snippets go into the prompt. This transparency is the key to getting reliable, debuggable results from an LLM.
+
+ It's a "smart copy" command for your project, not a magic wand.
+
+ ## How It Works
+
+ The workflow is dead simple:
+
+ 1. **Gather:** Run `kopipasta` and point it at the files, directories, and URLs that matter for your task.
+ 2. **Select:** The tool interactively helps you choose what to include. For large files, you can send just a snippet or even hand-pick individual functions.
+ 3. **Define:** Your default editor (`$EDITOR`) opens for you to write your instructions to the LLM.
+ 4. **Paste:** The final, comprehensive prompt is now on your clipboard, ready to be pasted into ChatGPT, Gemini, Claude, or your LLM of choice.
+
+ ## Installation
+
+ ```bash
+ # Using pipx (recommended for CLI tools)
+ pipx install kopipasta
+
+ # Or using standard pip
+ pip install kopipasta
+ ```
+
+ ## Usage
+
+ ```bash
+ kopipasta [options] [files_or_directories_or_urls...]
+ ```
+
+ **Arguments:**
+
+ * `[files_or_directories_or_urls...]`: One or more paths to files, directories, or web URLs to use as the starting point for your context.
+
+ **Options:**
+
+ * `-t TASK`, `--task TASK`: Provide the task description directly on the command line, skipping the editor.
+
+ ## Key Features
+
+ * **Total Context Control:** Interactively select files, directories, snippets, or even individual functions. You see everything that goes into the prompt.
+ * **Transparent & Explicit:** No hidden RAG. You know exactly what's in the prompt because you built it. This makes debugging LLM failures possible.
+ * **Web-Aware:** Pulls in content directly from URLs—perfect for API documentation.
+ * **Safety First:**
+     * Automatically respects your `.gitignore` rules.
+     * Detects if you're about to include secrets from a `.env` file and asks what to do.
+ * **Context-Aware:** Keeps a running total of the prompt size (in characters and estimated tokens) so you don't overload the LLM's context window.
+ * **Developer-Friendly:**
+     * Uses your familiar `$EDITOR` for writing task descriptions.
+     * Copies the final prompt directly to your clipboard.
+     * Provides syntax highlighting during chunk selection.
+
+ ## A Real-World Example
+
+ I had a bug where my `setup.py` didn't include all the dependencies from `requirements.txt`.
+
+ 1. I ran `kopipasta -t "Update setup.py to read dependencies dynamically from requirements.txt" setup.py requirements.txt`.
+ 2. The tool confirmed the inclusion of both files and copied the complete prompt to my clipboard.
+ 3. I pasted the prompt into my LLM chat window.
+ 4. I copied the LLM's suggested code back into my local `setup.py`.
+ 5. I tested the changes and committed.
+
+ No manual file reading, no clumsy copy-pasting, just a clean, context-rich prompt that I had full control over.
+
+ ## Configuration
+
+ Set your preferred command-line editor via the `EDITOR` environment variable.
+ ```bash
+ export EDITOR=nvim # or vim, nano, code --wait, etc.
+ ```
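The running character/token counter described under Key Features can be approximated with the common rule of thumb of roughly four characters per token for English text and code. This is a sketch of that heuristic only; the estimator kopipasta actually uses is not shown in this diff:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 chars/token rule of thumb."""
    return max(1, len(text) // 4)

# Build a toy prompt and report its size the way a context counter might.
prompt = "def hello():\n    print('hello world')\n" * 100
print(f"{len(prompt)} chars, ~{estimate_tokens(prompt)} tokens")
```

Estimates like this are deliberately coarse; they exist to warn you before a prompt blows past a model's context window, not to bill by the token.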
@@ -0,0 +1,44 @@
+ import fnmatch
+ import os
+ from typing import List, Optional, Tuple
+
+
+ FileTuple = Tuple[str, bool, Optional[List[str]], str]
+
+
+ def read_file_contents(file_path):
+     try:
+         with open(file_path, 'r') as file:
+             return file.read()
+     except Exception as e:
+         print(f"Error reading {file_path}: {e}")
+         return ""
+
+
+ def is_ignored(path, ignore_patterns):
+     path = os.path.normpath(path)
+     for pattern in ignore_patterns:
+         if fnmatch.fnmatch(os.path.basename(path), pattern) or fnmatch.fnmatch(path, pattern):
+             return True
+     return False
+
+
+ def is_binary(file_path):
+     try:
+         with open(file_path, 'rb') as file:
+             chunk = file.read(1024)
+         return b'\0' in chunk  # null bytes indicate a binary file
+     except IOError:
+         return False
+
+
+ def get_human_readable_size(size):
+     for unit in ['B', 'KB', 'MB', 'GB']:
+         if size < 1024.0:
+             return f"{size:.2f} {unit}"
+         size /= 1024.0
+     return f"{size:.2f} TB"
+
+
+ def is_large_file(file_path, threshold=102400):  # 100 KB threshold
+     return os.path.getsize(file_path) > threshold
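The `fnmatch`-based ignore check above can be sanity-checked in isolation. This sketch reproduces the same logic so it runs standalone:

```python
import fnmatch
import os

def is_ignored(path, ignore_patterns):
    # Same logic as the module above: match the basename and the full path.
    path = os.path.normpath(path)
    for pattern in ignore_patterns:
        if fnmatch.fnmatch(os.path.basename(path), pattern) or fnmatch.fnmatch(path, pattern):
            return True
    return False

patterns = ["*.pyc", "node_modules", ".env"]
print(is_ignored("src/app.pyc", patterns))   # True: basename matches *.pyc
print(is_ignored("node_modules", patterns))  # True: exact match
print(is_ignored("src/app.py", patterns))    # False: no pattern matches
```

Note that a bare pattern like `node_modules` matches only a path or basename that is exactly `node_modules`; files nested beneath it are not matched, since this follows `fnmatch` shell-glob semantics rather than full `.gitignore` semantics.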