loclaude 0.0.3 → 0.0.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -7,6 +7,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.0.4] - 2025-01-22
+
+### Changed
+
+- Updates README.md
+
 ## [0.0.3] - 2025-01-22
 
 ### Added
package/README.md CHANGED
@@ -292,12 +292,6 @@ Yes! Once you have models downloaded, you can run as many sessions as you want w
 
 No, but highly recommended. CPU-only mode works with smaller models at ~10-20 tokens/sec. A GPU (16GB+ VRAM) gives you 50-100 tokens/sec with larger, better models.
 
-### What's the catch?
-
-- Initial setup takes 5-10 minutes
-- Model downloads are large (4-20GB)
-- GPU hardware investment if you don't have one (~$500-1500 used)
-
 ### Can I use this with the Claude API too?
 
 Absolutely! Keep using Claude API for critical tasks, use loclaude for everything else to save money and avoid limits.
@@ -356,32 +350,6 @@ If inference is slow on CPU:
 2. Expect ~10-20 tokens/sec on modern CPUs
 3. Consider cloud models via Ollama: `glm-4.7:cloud`
 
-## Contributing
-
-loclaude is open source and welcomes contributions! Here's how you can help:
-
-### Share Your Experience
-
-- Star the repo if loclaude saves you money or rate limits
-- Share your setup and model recommendations
-- Write about your experience on dev.to, Twitter, or your blog
-- Report bugs and request features via GitHub Issues
-
-### Code Contributions
-
-- Fix bugs or add features (see open issues)
-- Improve documentation or examples
-- Add support for new model providers
-- Optimize model loading and performance
-
-### Spread the Word
-
-- Post on r/LocalLLaMA, r/selfhosted, r/ClaudeAI
-- Share in Discord/Slack dev communities
-- Help others troubleshoot in GitHub Discussions
-
-Every star, issue report, and shared experience helps more developers discover unlimited local Claude Code.
-
 ## Getting Help
 
 - **Issues/Bugs**: [GitHub Issues](https://github.com/nicholasgalante1997/loclaude/issues)
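
Step 3 of the retained troubleshooting list offloads inference to Ollama's hosted runtime; cloud models use the same CLI as local ones. A minimal sketch, assuming the `glm-4.7:cloud` tag the README cites is available and your Ollama CLI is signed in to an account with cloud access:

```sh
# Run the cloud-hosted model from the troubleshooting step; inference
# executes on Ollama's servers instead of the local CPU or GPU.
ollama run glm-4.7:cloud "Summarize the tradeoffs of CPU-only inference"
```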
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "loclaude",
-  "version": "0.0.3",
+  "version": "0.0.4",
   "description": "Claude Code with local Ollama LLMs - Zero API costs, no rate limits, complete privacy",
   "type": "module",
   "license": "./LICENSE",
@@ -76,7 +76,7 @@
     "release:rc": "npm publish --tag rc --access public",
     "release:alpha": "npm publish --tag alpha --access public",
     "release:beta": "npm publish --tag beta --access public",
-    "postrelease": "./scripts/tag.sh $(jq -r .version package.json)"
+    "postrelease": "export LOCLAUDE_RELEASE_VERSION=$(jq -r .version package.json) ./scripts/commit.sh $LOCLAUDE_RELEASE_VERSION && ./scripts/tag.sh $LOCLAUDE_RELEASE_VERSION"
   },
   "dependencies": {
     "@loclaude-internal/cli": "^0.0.3"