@mariozechner/pi 0.1.3 → 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +3 -3
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -5,12 +5,12 @@ Quickly deploy LLMs on GPU pods from [Prime Intellect](https://www.primeintellec
 ## Installation
 
 ```bash
-npm install -g @badlogic/pi
+npm install -g @mariozechner/pi
 ```
 
 Or run directly with npx:
 ```bash
-npx @badlogic/pi
+npx @mariozechner/pi
 ```
 
 ## What This Is
@@ -314,4 +314,4 @@ Remember: Tool calling is still an evolving feature in the LLM ecosystem. What w
 - **Connection Refused**: Check pod is running and port is correct
 - **HF Token Issues**: Ensure HF_TOKEN is set before running setup
 - **Access Denied**: Some models (like Llama, Mistral) require completing an access request on HuggingFace first. Visit the model page and click "Request access"
-- **Tool Calling Errors**: See the Tool Calling section above - consider disabling it or using a different model
+- **Tool Calling Errors**: See the Tool Calling section above - consider disabling it or using a different model
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@mariozechner/pi",
-  "version": "0.1.3",
+  "version": "0.1.4",
   "description": "CLI tool for managing vLLM deployments on GPU pods from Prime Intellect, Vast.ai, etc.",
   "main": "pi.js",
   "bin": {