janito 2.20.1__py3-none-any.whl → 2.21.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
janito/README.md CHANGED
@@ -43,7 +43,12 @@ janito -p google -m gemini-2.0-flash-exp "Generate unit tests"

  ### Interactive Chat Mode

- Start an interactive session:
+ Start an interactive session (default mode):
+ ```bash
+ janito
+ ```
+
+ Or explicitly:
  ```bash
  janito --chat
  ```
@@ -80,17 +85,23 @@ janito --set model=kimi-k1-8k

  ### OpenAI

- - **Models**: gpt-4, gpt-4-turbo, gpt-3.5-turbo
+ - **Models**: gpt-5, gpt-4.1, gpt-4o, gpt-4-turbo, gpt-3.5-turbo
  - **Setup**: Get API key from [OpenAI Platform](https://platform.openai.com/)

  ### Anthropic

- - **Models**: claude-3-5-sonnet-20241022, claude-3-opus-20240229
+ - **Models**: claude-3-7-sonnet-20250219, claude-3-5-sonnet-20241022, claude-3-opus-20250514
  - **Setup**: Get API key from [Anthropic Console](https://console.anthropic.com/)

+ ### IBM WatsonX
+
+ - **Models**: ibm/granite-3-8b-instruct, ibm/granite-3-2b-instruct, meta-llama/llama-3-1-8b-instruct, meta-llama/llama-3-1-70b-instruct, mistralai/mistral-large
+ - **Strengths**: Enterprise-grade AI, IBM Granite models, hosted Llama and Mistral models
+ - **Setup**: Get API key and project ID from [IBM Cloud](https://cloud.ibm.com/)
+
  ### Google

- - **Models**: gemini-2.0-flash-exp, gemini-1.5-pro
+ - **Models**: gemini-2.5-flash, gemini-2.5-pro, gemini-2.5-flash-lite-preview-06-17
  - **Setup**: Get API key from [Google AI Studio](https://makersuite.google.com/)

  ## Advanced Features
@@ -114,12 +125,44 @@ janito --role python-expert "Optimize this algorithm"

  ### Environment Variables
  You can also configure via environment variables:
+
+ **MoonshotAI:**
  ```bash
  export MOONSHOTAI_API_KEY=your_key_here
  export JANITO_PROVIDER=moonshotai
  export JANITO_MODEL=kimi-k1-8k
  ```

+ **OpenAI:**
+ ```bash
+ export OPENAI_API_KEY=your_key_here
+ export JANITO_PROVIDER=openai
+ export JANITO_MODEL=gpt-5
+ ```
+
+ **IBM WatsonX:**
+ ```bash
+ export WATSONX_API_KEY=your_key_here
+ export WATSONX_PROJECT_ID=your_project_id
+ export WATSONX_SPACE_ID=your_space_id # optional
+ export JANITO_PROVIDER=ibm
+ export JANITO_MODEL=ibm/granite-3-8b-instruct
+ ```
+
+ **Anthropic:**
+ ```bash
+ export ANTHROPIC_API_KEY=your_key_here
+ export JANITO_PROVIDER=anthropic
+ export JANITO_MODEL=claude-3-7-sonnet-20250219
+ ```
+
+ **Google:**
+ ```bash
+ export GOOGLE_API_KEY=your_key_here
+ export JANITO_PROVIDER=google
+ export JANITO_MODEL=gemini-2.5-flash
+ ```
+
  ## Examples

  ### Code Generation
janito/docs/GETTING_STARTED.md CHANGED
@@ -1,6 +1,6 @@
  # Getting Started with Janito

- This guide will help you set up Janito CLI quickly and start using it with MoonshotAI as your default provider.
+ This guide will help you set up Janito CLI quickly and start using it with your preferred AI provider.

  ## Quick Setup (2 minutes)

@@ -9,14 +9,29 @@ This guide will help you set up Janito CLI quickly and start using it with Moons
  pip install janito
  ```

- ### 2. Get Your MoonshotAI API Key
+ ### 2. Choose Your Provider

+ Janito supports multiple AI providers. Choose one to get started:
+
+ **MoonshotAI (Recommended for Chinese users)**
  1. Go to [Moonshot AI Platform](https://platform.moonshot.cn/)
  2. Sign up for an account
  3. Navigate to API Keys section
  4. Create a new API key

+ **OpenAI**
+ 1. Go to [OpenAI Platform](https://platform.openai.com/)
+ 2. Sign up and add payment method
+ 3. Create an API key
+
+ **IBM WatsonX**
+ 1. Go to [IBM Cloud](https://cloud.ibm.com/)
+ 2. Create a WatsonX AI service
+ 3. Get your API key and project ID
+
  ### 3. Configure Janito
+
+ **MoonshotAI Setup:**
  ```bash
  # Set MoonshotAI as your default provider
  janito --set-api-key YOUR_API_KEY -p moonshotai
@@ -25,6 +40,25 @@ janito --set-api-key YOUR_API_KEY -p moonshotai
  janito "Hello, can you introduce yourself?"
  ```

+ **OpenAI Setup:**
+ ```bash
+ # Set OpenAI as your default provider
+ janito --set-api-key YOUR_OPENAI_API_KEY -p openai
+
+ # Verify it's working
+ janito "Hello, can you introduce yourself?"
+ ```
+
+ **IBM WatsonX Setup:**
+ ```bash
+ # Set IBM WatsonX as your default provider
+ janito --set-api-key YOUR_WATSONX_API_KEY -p ibm
+ janito --set-config ibm project_id YOUR_PROJECT_ID
+
+ # Verify it's working
+ janito "Hello, can you introduce yourself?"
+ ```
+
  ## Your First Commands

  ### Basic Usage
@@ -52,29 +86,59 @@ janito -W ./my_project "Create a REST API with FastAPI"

  ### Set as Default Provider
  ```bash
- # Make MoonshotAI your permanent default
- janito --set provider=moonshotai
- janito --set model=kimi-k1-8k
+ # Make your chosen provider the permanent default
+ janito --set provider=moonshotai # or openai, ibm, etc.
+ janito --set model=kimi-k1-8k # or gpt-5, ibm/granite-3-8b-instruct, etc.
  ```

  ### Environment Variables
  You can also use environment variables:
+
+ **MoonshotAI:**
  ```bash
  export MOONSHOTAI_API_KEY=your_key_here
  export JANITO_PROVIDER=moonshotai
  export JANITO_MODEL=kimi-k1-8k
  ```

- ## MoonshotAI Models
+ **OpenAI:**
+ ```bash
+ export OPENAI_API_KEY=your_key_here
+ export JANITO_PROVIDER=openai
+ export JANITO_MODEL=gpt-5
+ ```
+
+ **IBM WatsonX:**
+ ```bash
+ export WATSONX_API_KEY=your_key_here
+ export WATSONX_PROJECT_ID=your_project_id
+ export WATSONX_SPACE_ID=your_space_id # optional
+ export JANITO_PROVIDER=ibm
+ export JANITO_MODEL=ibm/granite-3-3-8b-instruct
+ ```

- Janito supports these MoonshotAI models:
+ ## Available Models by Provider

+ ### MoonshotAI Models
  - **kimi-k1-8k**: Fast responses, good for general tasks
  - **kimi-k1-32k**: Better for longer contexts
  - **kimi-k1-128k**: Best for very long documents
  - **kimi-k2-turbo-preview**: Latest model with enhanced capabilities
  - **kimi-k2-turbo-preview**: Turbo version of the advanced reasoning model

+ ### OpenAI Models
+ - **gpt-5**: Latest GPT model with advanced capabilities
+ - **gpt-4.1**: High-performance model for complex tasks
+ - **gpt-4o**: Optimized for speed and cost
+ - **o3-mini**: Reasoning-focused model
+
+ ### IBM WatsonX Models
+ - **ibm/granite-3-3-8b-instruct**: IBM's latest Granite 3.3 8B Instruct model (default)
+ - **ibm/granite-3-8b-instruct**: IBM's Granite 3 8B Instruct model
+ - **meta-llama/llama-3-3-70b-instruct**: Meta Llama 3.3 70B hosted on WatsonX
+ - **meta-llama/llama-3-1-70b-instruct**: Meta Llama 3.1 70B hosted on WatsonX
+ - **mistralai/mistral-large-2407**: Latest Mistral Large model hosted on WatsonX
+
  ## Next Steps

  1. **Explore tools**: Run `janito --list-tools` to see available tools
@@ -91,14 +155,14 @@ Janito supports these MoonshotAI models:
  # Check available providers
  janito --list-providers

- # Re-register MoonshotAI
- janito --set-api-key YOUR_KEY -p moonshotai
+ # Re-register your provider
+ janito --set-api-key YOUR_KEY -p YOUR_PROVIDER
  ```

  **"Model not available" error**
  ```bash
- # List available MoonshotAI models
- janito -p moonshotai --list-models
+ # List available models for your provider
+ janito -p YOUR_PROVIDER --list-models
  ```

  **API key issues**
@@ -107,7 +171,16 @@ janito -p moonshotai --list-models
  janito --show-config

  # Reset API key
- janito --set-api-key NEW_KEY -p moonshotai
+ janito --set-api-key NEW_KEY -p YOUR_PROVIDER
+ ```
+
+ **IBM WatsonX specific issues**
+ ```bash
+ # Check if project ID is set
+ janito --show-config
+
+ # Set project ID if missing
+ janito --set-config ibm project_id YOUR_PROJECT_ID
  ```

  ### Getting Help
janito/providers/__init__.py CHANGED
@@ -9,3 +9,4 @@ import janito.providers.alibaba.provider
  import janito.providers.zai.provider
  import janito.providers.cerebras.provider
  import janito.providers.mistral.provider
+ import janito.providers.ibm.provider
janito/providers/azure_openai/provider.py CHANGED
@@ -82,7 +82,7 @@ class AzureOpenAIProvider(LLMProvider):
          If the model_name is not in MODEL_SPECS, return a generic info dict.
          """
          if model_name is None:
-             # Return all known specs, but note: only static ones are listed
+             # Return all known specs
              return {
                  name: model_info.to_dict()
                  for name, model_info in self.MODEL_SPECS.items()
janito/providers/ibm/README.md ADDED
@@ -0,0 +1,99 @@
+ # IBM WatsonX AI Provider
+
+ This provider enables access to IBM WatsonX AI services, including IBM's Granite models and other hosted models.
+
+ ## Setup
+
+ ### Prerequisites
+
+ 1. **IBM Cloud Account**: You need an IBM Cloud account with WatsonX AI service enabled.
+ 2. **API Key**: Generate an API key from your IBM Cloud dashboard.
+ 3. **Project ID**: Create a WatsonX project and get the project ID.
+
+ ### Authentication
+
+ Set up your credentials using the CLI:
+
+ ```bash
+ # Set the API key
+ janito --set-api-key YOUR_IBM_API_KEY -p ibm
+
+ # Set the project ID
+ janito --set-config ibm project_id YOUR_PROJECT_ID
+
+ # Optional: Set space ID if using WatsonX spaces
+ janito --set-config ibm space_id YOUR_SPACE_ID
+ ```
+
+ ### Environment Variables
+
+ Alternatively, you can set environment variables:
+
+ ```bash
+ export WATSONX_API_KEY="your-api-key"
+ export WATSONX_PROJECT_ID="your-project-id"
+ export WATSONX_SPACE_ID="your-space-id" # optional
+ ```
+
+ ## Available Models
+
+ The IBM provider supports the following models:
+
+ - **ibm/granite-3-8b-instruct**: IBM's Granite 3 8B Instruct model (default)
+ - **ibm/granite-3-2b-instruct**: IBM's Granite 3 2B Instruct model
+ - **meta-llama/llama-3-1-8b-instruct**: Meta Llama 3.1 8B hosted on WatsonX
+ - **meta-llama/llama-3-1-70b-instruct**: Meta Llama 3.1 70B hosted on WatsonX
+ - **mistralai/mistral-large**: Mistral Large model hosted on WatsonX
+
+ ## Usage
+
+ ### Command Line
+
+ ```bash
+ # Use IBM provider with default model
+ janito -p ibm "Explain quantum computing"
+
+ # Use specific IBM model
+ janito -p ibm -m ibm/granite-3-2b-instruct "Generate a Python function"
+
+ # Interactive chat mode
+ janito -p ibm --chat
+ ```
+
+ ### Configuration
+
+ You can set IBM as your default provider:
+
+ ```bash
+ janito --set-config provider ibm
+ ```
+
+ ## API Reference
+
+ The IBM provider uses IBM WatsonX's REST API with OpenAI-compatible format. The base URL is:
+
+ ```
+ https://us-south.ml.cloud.ibm.com
+ ```
+
+ ## Limitations
+
+ - **Rate Limits**: IBM WatsonX has rate limits based on your subscription tier
+ - **Context Window**: Models have different context window limits (typically 128K tokens)
+ - **Region Support**: Currently configured for US-South region
+
+ ## Troubleshooting
+
+ ### Common Issues
+
+ 1. **Authentication Error**: Ensure your API key and project ID are correct
+ 2. **Model Not Found**: Check if the model is available in your WatsonX project
+ 3. **Rate Limit Exceeded**: Wait and retry, or upgrade your subscription
+
+ ### Debug Mode
+
+ Enable debug logging to see API requests:
+
+ ```bash
+ janito -p ibm --verbose "Your prompt here"
+ ```
janito/providers/ibm/__init__.py ADDED
@@ -0,0 +1 @@
+ # IBM WatsonX AI Provider
janito/providers/ibm/model_info.py ADDED
@@ -0,0 +1,78 @@
+ """IBM WatsonX AI model specifications."""
+
+ from janito.llm.model import LLMModelInfo
+
+ MODEL_SPECS = {
+     "ibm/granite-3-8b-instruct": LLMModelInfo(
+         name="ibm/granite-3-8b-instruct",
+         context=128000,
+         max_input=128000,
+         max_response=4096,
+         max_cot=4096,
+         thinking_supported=False,
+         category="IBM WatsonX",
+     ),
+     "ibm/granite-3-3-8b-instruct": LLMModelInfo(
+         name="ibm/granite-3-3-8b-instruct",
+         context=128000,
+         max_input=128000,
+         max_response=4096,
+         max_cot=4096,
+         thinking_supported=False,
+         category="IBM WatsonX",
+     ),
+     "meta-llama/llama-3-1-70b-instruct": LLMModelInfo(
+         name="meta-llama/llama-3-1-70b-instruct",
+         context=128000,
+         max_input=128000,
+         max_response=4096,
+         max_cot=4096,
+         thinking_supported=False,
+         category="IBM WatsonX",
+     ),
+     "meta-llama/llama-3-3-70b-instruct": LLMModelInfo(
+         name="meta-llama/llama-3-3-70b-instruct",
+         context=128000,
+         max_input=128000,
+         max_response=4096,
+         max_cot=4096,
+         thinking_supported=False,
+         category="IBM WatsonX",
+     ),
+     "mistralai/mistral-large": LLMModelInfo(
+         name="mistralai/mistral-large",
+         context=128000,
+         max_input=128000,
+         max_response=4096,
+         max_cot=4096,
+         thinking_supported=False,
+         category="IBM WatsonX",
+     ),
+     "mistralai/mistral-large-2407": LLMModelInfo(
+         name="mistralai/mistral-large-2407",
+         context=128000,
+         max_input=128000,
+         max_response=4096,
+         max_cot=4096,
+         thinking_supported=False,
+         category="IBM WatsonX",
+     ),
+     "openai/gpt-oss-120b": LLMModelInfo(
+         name="openai/gpt-oss-120b",
+         context=128000,
+         max_input=128000,
+         max_response=4096,
+         max_cot=4096,
+         thinking_supported=True,
+         category="IBM WatsonX",
+     ),
+     "openai/gpt-oss-20b": LLMModelInfo(
+         name="openai/gpt-oss-20b",
+         context=128000,
+         max_input=128000,
+         max_response=4096,
+         max_cot=4096,
+         thinking_supported=True,
+         category="IBM WatsonX",
+     ),
+ }
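For orientation, a minimal sketch of how this spec table could be queried; the import path matches the new module above, but the lookup itself is an illustrative assumption, not code shipped in the wheel:

```python
# Hypothetical usage sketch for the new IBM model spec table (not part of the package).
from janito.providers.ibm.model_info import MODEL_SPECS

spec = MODEL_SPECS.get("ibm/granite-3-3-8b-instruct")
if spec is not None:
    # Fields passed to LLMModelInfo in the diff above: context, max_input, max_response, ...
    print(spec.name, spec.context, spec.max_response)
```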
janito/providers/ibm/provider.py ADDED
@@ -0,0 +1,149 @@
+ """IBM WatsonX AI Provider implementation."""
+
+ from janito.llm.provider import LLMProvider
+ from janito.llm.auth import LLMAuthManager
+ from janito.llm.driver_config import LLMDriverConfig
+ from janito.tools import get_local_tools_adapter
+ from janito.providers.registry import LLMProviderRegistry
+ from .model_info import MODEL_SPECS
+
+ try:
+     from janito.drivers.openai.driver import OpenAIModelDriver
+
+     available = True
+     unavailable_reason = None
+ except ImportError as e:
+     available = False
+     unavailable_reason = str(e)
+
+
+ class IBMProvider(LLMProvider):
+     """IBM WatsonX AI Provider for accessing IBM's AI services."""
+
+     name = "ibm"
+     NAME = "ibm"
+     MAINTAINER = "João Pinto <janito@ikignosis.org>"
+     MODEL_SPECS = MODEL_SPECS
+     DEFAULT_MODEL = "ibm/granite-3-3-8b-instruct"
+
+     def __init__(
+         self, auth_manager: LLMAuthManager = None, config: LLMDriverConfig = None
+     ):
+         self._tools_adapter = get_local_tools_adapter()
+         self._driver = None
+
+         if not self.available:
+             return
+
+         self._initialize_config(auth_manager, config)
+         self._setup_model_config()
+
+     def _initialize_config(self, auth_manager, config):
+         """Initialize configuration and API credentials."""
+         self.auth_manager = auth_manager or LLMAuthManager()
+
+         # IBM WatsonX uses multiple credentials
+         self._api_key = self.auth_manager.get_credentials(type(self).NAME)
+         if not self._api_key:
+             from janito.llm.auth_utils import handle_missing_api_key
+
+             handle_missing_api_key(self.name, "WATSONX_API_KEY")
+
+         # Get project ID for WatsonX
+         self._project_id = self.auth_manager.get_credentials(
+             f"{type(self).NAME}_project_id"
+         )
+         if not self._project_id:
+             from janito.llm.auth_utils import handle_missing_api_key
+
+             handle_missing_api_key(self.name, "WATSONX_PROJECT_ID")
+
+         # Get region/space ID
+         self._space_id = self.auth_manager.get_credentials(
+             f"{type(self).NAME}_space_id"
+         )
+
+         self._driver_config = config or LLMDriverConfig(model=None)
+         if not self._driver_config.model:
+             self._driver_config.model = self.DEFAULT_MODEL
+         if not self._driver_config.api_key:
+             self._driver_config.api_key = self._api_key
+
+     def _setup_model_config(self):
+         """Configure token limits based on model specifications."""
+         model_name = self._driver_config.model
+         model_spec = self.MODEL_SPECS.get(model_name)
+
+         # Reset token parameters
+         if hasattr(self._driver_config, "max_tokens"):
+             self._driver_config.max_tokens = None
+         if hasattr(self._driver_config, "max_completion_tokens"):
+             self._driver_config.max_completion_tokens = None
+
+         if model_spec:
+             if getattr(model_spec, "thinking_supported", False):
+                 max_cot = getattr(model_spec, "max_cot", None)
+                 if max_cot and max_cot != "N/A":
+                     self._driver_config.max_completion_tokens = int(max_cot)
+             else:
+                 max_response = getattr(model_spec, "max_response", None)
+                 if max_response and max_response != "N/A":
+                     self._driver_config.max_tokens = int(max_response)
+
+         # Set IBM WatsonX specific parameters
+         self._driver_config.base_url = "https://us-south.ml.cloud.ibm.com"
+         self._driver_config.project_id = self._project_id
+         if self._space_id:
+             self._driver_config.space_id = self._space_id
+
+         self.fill_missing_device_info(self._driver_config)
+
+     @property
+     def driver(self):
+         if not self.available:
+             raise ImportError(f"IBMProvider unavailable: {self.unavailable_reason}")
+         return self._driver
+
+     @property
+     def available(self):
+         return available
+
+     @property
+     def unavailable_reason(self):
+         return unavailable_reason
+
+     def create_driver(self):
+         """
+         Creates and returns a new OpenAIModelDriver instance configured for IBM WatsonX.
+         IBM WatsonX uses OpenAI-compatible API format.
+         """
+         driver = OpenAIModelDriver(
+             tools_adapter=self._tools_adapter, provider_name=self.NAME
+         )
+         driver.config = self._driver_config
+         return driver
+
+     def create_agent(self, tools_adapter=None, agent_name: str = None, **kwargs):
+         from janito.llm.agent import LLMAgent
+
+         if tools_adapter is None:
+             tools_adapter = get_local_tools_adapter()
+         raise NotImplementedError(
+             "create_agent must be constructed via new factory using input/output queues and config."
+         )
+
+     @property
+     def model_name(self):
+         return self._driver_config.model
+
+     @property
+     def driver_config(self):
+         """Public, read-only access to the provider's LLMDriverConfig object."""
+         return self._driver_config
+
+     def execute_tool(self, tool_name: str, event_bus, *args, **kwargs):
+         self._tools_adapter.event_bus = event_bus
+         return self._tools_adapter.execute_by_name(tool_name, *args, **kwargs)
+
+
+ LLMProviderRegistry.register(IBMProvider.NAME, IBMProvider)
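A short usage sketch, based only on what the new provider class exposes in the diff above (`available`, `unavailable_reason`, `model_name`, `create_driver()`); it assumes WATSONX_API_KEY and WATSONX_PROJECT_ID have already been configured:

```python
# Hypothetical sketch; relies only on attributes visible in the new provider above.
from janito.providers.ibm.provider import IBMProvider

provider = IBMProvider()  # credentials are resolved through LLMAuthManager
if provider.available:
    print("default model:", provider.model_name)  # ibm/granite-3-3-8b-instruct unless overridden
    driver = provider.create_driver()             # OpenAI-compatible driver pointed at WatsonX
else:
    print("IBM provider unavailable:", provider.unavailable_reason)
```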
janito/tools/adapters/local/fetch_url.py CHANGED
@@ -15,6 +15,9 @@ class FetchUrlTool(ToolBase):
      Args:
          url (str): The URL of the web page to fetch.
          search_strings (list[str], optional): Strings to search for in the page content.
+         max_length (int, optional): Maximum number of characters to return. Defaults to 5000.
+         max_lines (int, optional): Maximum number of lines to return. Defaults to 200.
+         context_chars (int, optional): Characters of context around search matches. Defaults to 400.
      Returns:
          str: Extracted text content from the web page, or a warning message. Example:
          - "<main text content...>"
@@ -25,14 +28,12 @@ class FetchUrlTool(ToolBase):
      permissions = ToolPermissions(read=True)
      tool_name = "fetch_url"

-     def run(self, url: str, search_strings: list[str] = None) -> str:
-         if not url.strip():
-             self.report_warning(tr("ℹ️ Empty URL provided."), ReportAction.READ)
-             return tr("Warning: Empty URL provided. Operation skipped.")
-         self.report_action(tr("🌐 Fetch URL '{url}' ...", url=url), ReportAction.READ)
+     def _fetch_url_content(self, url: str) -> str:
+         """Fetch URL content and handle HTTP errors."""
          try:
              response = requests.get(url, timeout=10)
              response.raise_for_status()
+             return response.text
          except requests.exceptions.HTTPError as http_err:
              status_code = http_err.response.status_code if http_err.response else None
              if status_code and 400 <= status_code < 500:
@@ -71,27 +72,88 @@ class FetchUrlTool(ToolBase):
              return tr(
                  "Warning: Error fetching URL: {url}: {err}", url=url, err=str(err)
              )
-         soup = BeautifulSoup(response.text, "html.parser")
+
+     def _extract_and_clean_text(self, html_content: str) -> str:
+         """Extract and clean text from HTML content."""
+         soup = BeautifulSoup(html_content, "html.parser")
          text = soup.get_text(separator="\n")
+
+         # Clean up excessive whitespace
+         lines = [line.strip() for line in text.splitlines() if line.strip()]
+         return "\n".join(lines)
+
+     def _filter_by_search_strings(
+         self, text: str, search_strings: list[str], context_chars: int
+     ) -> str:
+         """Filter text by search strings with context."""
+         filtered = []
+         for s in search_strings:
+             idx = text.find(s)
+             if idx != -1:
+                 start = max(0, idx - context_chars)
+                 end = min(len(text), idx + len(s) + context_chars)
+                 snippet = text[start:end]
+                 filtered.append(snippet)
+
+         if filtered:
+             return "\n...\n".join(filtered)
+         else:
+             return tr("No lines found for the provided search strings.")
+
+     def _apply_limits(self, text: str, max_length: int, max_lines: int) -> str:
+         """Apply length and line limits to text."""
+         # Apply length limit
+         if len(text) > max_length:
+             text = text[:max_length] + "\n... (content truncated due to length limit)"
+
+         # Apply line limit
+         lines = text.splitlines()
+         if len(lines) > max_lines:
+             text = (
+                 "\n".join(lines[:max_lines])
+                 + "\n... (content truncated due to line limit)"
+             )
+
+         return text
+
+     def run(
+         self,
+         url: str,
+         search_strings: list[str] = None,
+         max_length: int = 5000,
+         max_lines: int = 200,
+         context_chars: int = 400,
+     ) -> str:
+         if not url.strip():
+             self.report_warning(tr("ℹ️ Empty URL provided."), ReportAction.READ)
+             return tr("Warning: Empty URL provided. Operation skipped.")
+
+         self.report_action(tr("🌐 Fetch URL '{url}' ...", url=url), ReportAction.READ)
+
+         # Fetch URL content
+         html_content = self._fetch_url_content(url)
+         if html_content.startswith("Warning:"):
+             return html_content
+
+         # Extract and clean text
+         text = self._extract_and_clean_text(html_content)
+
+         # Filter by search strings if provided
          if search_strings:
-             filtered = []
-             for s in search_strings:
-                 idx = text.find(s)
-                 if idx != -1:
-                     start = max(0, idx - 200)
-                     end = min(len(text), idx + len(s) + 200)
-                     snippet = text[start:end]
-                     filtered.append(snippet)
-             if filtered:
-                 text = "\n...\n".join(filtered)
-             else:
-                 text = tr("No lines found for the provided search strings.")
+             text = self._filter_by_search_strings(text, search_strings, context_chars)
+
+         # Apply limits
+         text = self._apply_limits(text, max_length, max_lines)
+
+         # Report success
          num_lines = len(text.splitlines())
+         total_chars = len(text)
          self.report_success(
              tr(
-                 "✅ {num_lines} {line_word}",
+                 "✅ {num_lines} {line_word}, {chars} chars",
                  num_lines=num_lines,
                  line_word=pluralize("line", num_lines),
+                 chars=total_chars,
              ),
              ReportAction.READ,
          )
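To make the new truncation behaviour concrete, here is a standalone restatement of the `_apply_limits` logic from the hunk above; it is an illustration only and does not import the tool itself:

```python
# Standalone illustration mirroring _apply_limits in the refactored fetch_url tool.
def apply_limits(text: str, max_length: int = 5000, max_lines: int = 200) -> str:
    # The length limit is applied first, then the line limit on the truncated text.
    if len(text) > max_length:
        text = text[:max_length] + "\n... (content truncated due to length limit)"
    lines = text.splitlines()
    if len(lines) > max_lines:
        text = "\n".join(lines[:max_lines]) + "\n... (content truncated due to line limit)"
    return text

sample = "\n".join(f"line {i}" for i in range(500))
print(apply_limits(sample, max_length=120, max_lines=5))
```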
janito-2.20.1.dist-info/METADATA → janito-2.21.0.dist-info/METADATA CHANGED
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: janito
- Version: 2.20.1
+ Version: 2.21.0
  Summary: A new Python package called janito.
  Author-email: João Pinto <janito@ikignosis.org>
  Project-URL: Homepage, https://github.com/ikignosis/janito
janito-2.20.1.dist-info/RECORD → janito-2.21.0.dist-info/RECORD CHANGED
@@ -1,4 +1,4 @@ janito/README.md,sha256=uxjCeZVAsRh5lHLm-d5tWD2y42fr17tCrLsTvr-CR0Q,3557
- janito/README.md,sha256=uxjCeZVAsRh5lHLm-d5tWD2y42fr17tCrLsTvr-CR0Q,3557
+ janito/README.md,sha256=Zmw6QlgvSxoM7EMpFwZKZ7nwNLCylYwB_OQn8tFJARU,4719
  janito/__init__.py,sha256=a0pFui3A_AfWJiUfg93yE-Vf4868bqG3y9yg2fkTIuY,244
  janito/__main__.py,sha256=lPQ8kAyYfyeS1KopmJ8EVY5g1YswlIqCS615mM_B_rM,70
  janito/_version.py,sha256=PtAVr2K9fOS5sv6aXzmcb7UaR5NLGMFOofL7Ndjh75o,2344
@@ -94,7 +94,7 @@ janito/cli/core/setters.py,sha256=PD3aT1y1q8XWQVtRNfrU0dtlW4JGdn6BMJyP7FCQWhc,46
  janito/cli/core/unsetters.py,sha256=FEw9gCt0vRvoCt0kRSNfVB2tzi_TqppJIx2nHPP59-k,2012
  janito/cli/single_shot_mode/__init__.py,sha256=Ct99pKe9tINzVW6oedZJfzfZQKWpXz-weSSCn0hrwHY,115
  janito/cli/single_shot_mode/handler.py,sha256=uZ3iYy6TJqdujAB6B0GEb_mrWUz_JMXckzt5n7FGzgk,5405
- janito/docs/GETTING_STARTED.md,sha256=EbXV7B3XxjSy1E0XQJFOVITVbTmZBVB7pjth2Mb4_rg,2835
+ janito/docs/GETTING_STARTED.md,sha256=zbsOfHi3H4I5qM-cj9wp4QUkmqNcq28cJc0kZMAlgIU,4978
  janito/drivers/dashscope.bak.zip,sha256=9Pv4Xyciju8jO1lEMFVgYXexoZkxmDO3Ig6vw3ODfL8,4936
  janito/drivers/openai_responses.bak.zip,sha256=E43eDCHGa2tCtdjzj_pMnWDdnsOZzj8BJTR5tJp8wcM,13352
  janito/drivers/azure_openai/driver.py,sha256=L2rQOl1d0BHaDChHLtZszAeuWNoyYIgwuYuahE1qJps,4152
@@ -124,7 +124,7 @@ janito/llm/driver_input.py,sha256=Zq7IO4KdQPUraeIo6XoOaRy1IdQAyYY15RQw4JU30uA,38
  janito/llm/message_parts.py,sha256=QY_0kDjaxdoErDgKPRPv1dNkkYJuXIBmHWNLiOEKAH4,1365
  janito/llm/model.py,sha256=EioBkdgn8hJ0iQaKN-0KbXlsrk3YKmwR9IbvoEbdVTE,1159
  janito/llm/provider.py,sha256=3FbhQPrWBSEoIdIi-5DWIh0DD_CM570EFf1NcuGyGko,7961
- janito/providers/__init__.py,sha256=HWgChDOEWKjXavZXwMlKKtdghiy7pxs4KUx3HfhyyUM,492
+ janito/providers/__init__.py,sha256=SWXtbW3lU7ORi6d9Ro04qnGDDNJ2Cwq0hfbKdZeResg,530
  janito/providers/dashscope.bak.zip,sha256=BwXxRmZreEivvRtmqbr5BR62IFVlNjAf4y6DrF2BVJo,5998
  janito/providers/registry.py,sha256=Ygwv9eVrTXOKhv0EKxSWQXO5WMHvajWE2Q_Lc3p7dKo,730
  janito/providers/alibaba/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
@@ -133,7 +133,7 @@ janito/providers/alibaba/provider.py,sha256=L7oK_TeJn8p_ZaFlls5eb9YF03VrFkR_PLdo
  janito/providers/anthropic/model_info.py,sha256=m6pBh0Ia8_xC1KZ7ke_4HeHIFw7nWjnYVItnRpkCSWc,1206
  janito/providers/anthropic/provider.py,sha256=aGynBxCFc7oTyvGNDUkbutJCKurC_9J4AkReC2LTPYo,3023
  janito/providers/azure_openai/model_info.py,sha256=TMSqEpQROIIYUGAyulYJ5xGhj7CbLoaKL_JXeLbXaG0,689
- janito/providers/azure_openai/provider.py,sha256=3pUYcdHrIcK1N1Pmw96tJvFo4lVse2wfZhGS9D2N3dw,5404
+ janito/providers/azure_openai/provider.py,sha256=q7DB2J_kaLCTDeE9reS30K9TKY6vr38ZJggT3WPPyTs,5365
  janito/providers/cerebras/__init__.py,sha256=pnbDTkbIG9fyF8YNOZOqdUwOGKc5ehq9oMWt625Jnis,27
  janito/providers/cerebras/model_info.py,sha256=oxgssDXZCluCqkfZ5J1-gZqCVvVqCJIXsT0SFoONik4,2408
  janito/providers/cerebras/provider.py,sha256=ZyGE_wWXKyRJQCYTs_FWOvZYqVu7-xRAmV1h8YMJU5s,5661
@@ -143,6 +143,10 @@ janito/providers/deepseek/provider.py,sha256=eU-wwVqJog_oJ8VyQyohm6OMHlvrddSszqT
  janito/providers/google/__init__.py,sha256=hE3OGJvLEhvNLhIK_XmCGIdrIj8MKlyGgdOLJ4mdess,38
  janito/providers/google/model_info.py,sha256=AakTmzvWm1GPvFzGAq6-PeE_Dpq7BmAAqmh3L8N5KKo,1126
  janito/providers/google/provider.py,sha256=NQVG5kovHOc2SDgWjVIwYGMqshvMUAqRAk9iMntQ52k,3606
+ janito/providers/ibm/README.md,sha256=5RLzoS8n0i4yxDrg7qbubbIc5wkqycvhFMn_JRdBY00,2621
+ janito/providers/ibm/__init__.py,sha256=xwB2n-GP1oUsi8LuYKMgziclSSE2yJoFR9WImuUgqNw,26
+ janito/providers/ibm/model_info.py,sha256=GTErYUoM146cDxVy3dUSACrut6y8BI5C5jEdbcUDW_0,2281
+ janito/providers/ibm/provider.py,sha256=PvFseMD527XuBtg9SImIzwyohjnzvE7KgK-ix3Ly8K0,5362
  janito/providers/mistral/__init__.py,sha256=L8X53_KdP8xjmwdxwPmcDHY5iiVZm2xKE4xixsjibHk,27
  janito/providers/mistral/model_info.py,sha256=1Zlac_Bss3wuMELROXTVDfeSeWaI1quhhi0ISEE9NQU,2266
  janito/providers/mistral/provider.py,sha256=v7zIl2EZajLF8zVyLhqhmaCzeXhyanK1X9wuX-bcCgc,4773
@@ -185,7 +189,7 @@ janito/tools/adapters/local/copy_file.py,sha256=MSa0VtbIXFGz00q-nW2KdtzER0icm6Y1
  janito/tools/adapters/local/create_directory.py,sha256=cmSbNUsqsY8wZ2RsX-g2c9FZkkTIM5jIvFyKKqvZKxM,2565
  janito/tools/adapters/local/create_file.py,sha256=fDdLzKchyGMx6o2L6k-_KYxDofcktdrXcV7lKuiZMMo,3458
  janito/tools/adapters/local/delete_text_in_file.py,sha256=uEeedRxXAR7_CqUc_qhbEdM0OzRi_pgnP-iDjs2Zvjk,5087
- janito/tools/adapters/local/fetch_url.py,sha256=CLSgd56IDHtjOClQ2Frrp-cMI-fSt1Ngc1WK5oMOHrI,3869
+ janito/tools/adapters/local/fetch_url.py,sha256=8pk2zUq-_KZq0XPIDbH0z1JxL5Xnwl82JhO88q9exME,6034
  janito/tools/adapters/local/find_files.py,sha256=sRdvWZ58ximset-dcwtmDj1E32kruGC6kTGjTlSZtb0,6023
  janito/tools/adapters/local/move_file.py,sha256=PBVp_gcmNxOLJeJsAtENg40SUG-lP7ijWE4sOG72jDk,4620
  janito/tools/adapters/local/open_html_in_browser.py,sha256=T3h3XUPgyGdXbiO-Ei-R2lSnAhUqKn_erAKr4YxAq7c,1950
@@ -222,9 +226,9 @@ janito/tools/adapters/local/validate_file_syntax/ps1_validator.py,sha256=TeIkPt0
  janito/tools/adapters/local/validate_file_syntax/python_validator.py,sha256=BfCO_K18qy92m-2ZVvHsbEU5e11OPo1pO9Vz4G4616E,130
  janito/tools/adapters/local/validate_file_syntax/xml_validator.py,sha256=AijlsP_PgNuC8ZbGsC5vOTt3Jur76otQzkd_7qR0QFY,284
  janito/tools/adapters/local/validate_file_syntax/yaml_validator.py,sha256=TgyI0HRL6ug_gBcWEm5TGJJuA4E34ZXcIzMpAbv3oJs,155
- janito-2.20.1.dist-info/licenses/LICENSE,sha256=GSAKapQH5ZIGWlpQTA7v5YrfECyaxaohUb1vJX-qepw,1090
- janito-2.20.1.dist-info/METADATA,sha256=6HXr9gjIG_oQEDhfe6FPY56FIHhccnuWriVp9BmymBQ,16365
- janito-2.20.1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- janito-2.20.1.dist-info/entry_points.txt,sha256=wIo5zZxbmu4fC-ZMrsKD0T0vq7IqkOOLYhrqRGypkx4,48
- janito-2.20.1.dist-info/top_level.txt,sha256=m0NaVCq0-ivxbazE2-ND0EA9Hmuijj_OGkmCbnBcCig,7
- janito-2.20.1.dist-info/RECORD,,
+ janito-2.21.0.dist-info/licenses/LICENSE,sha256=GSAKapQH5ZIGWlpQTA7v5YrfECyaxaohUb1vJX-qepw,1090
+ janito-2.21.0.dist-info/METADATA,sha256=YCkSoe96woQWCrwi_bRPCY7vnPUOqvcPG6j98r6GTmc,16365
+ janito-2.21.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ janito-2.21.0.dist-info/entry_points.txt,sha256=wIo5zZxbmu4fC-ZMrsKD0T0vq7IqkOOLYhrqRGypkx4,48
+ janito-2.21.0.dist-info/top_level.txt,sha256=m0NaVCq0-ivxbazE2-ND0EA9Hmuijj_OGkmCbnBcCig,7
+ janito-2.21.0.dist-info/RECORD,,