atlas-pipeline-mcp 1.0.11 → 1.0.13

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +36 -31
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,47 +1,49 @@
 
- # 🗺️ Atlas MCP Server (`atlas-pipeline-mcp`)
+ # Atlas MCP Server (`atlas-pipeline-mcp`)
 
- **The Agentic AI Pipeline for your IDE.**
- Works with **Cursor**, **GitHub Copilot**, **Windsurf**, and **VS Code**.
+ **The Agentic AI Pipeline for your IDE.**
+ Works natively with **Cursor**, **GitHub Copilot**, **Windsurf**, and **VS Code**.
 
  Atlas is an MCP server that gives your IDE "Agentic Superpowers". Instead of just creating code, it enables a full analysis pipeline:
  **Intent → Context → Decomposition → Variants → Critique → Optimization**.
 
  ---
 
- ## 🚀 Key Features
- - **Zero Config**: No API keys required! Uses your IDE's built-in AI (Copilot/Cursor) to do the heavy lifting.
- - **Agentic Workflow**: Breaks down complex tasks into a DAG (Directed Acyclic Graph) of subtasks.
+ ## Key Features
+
+ - **Zero Config**: No API keys required. It uses your IDE's built-in AI (Copilot/Cursor) for analysis.
+ - **Agentic Workflow**: Breaks down specific tasks into a DAG (Directed Acyclic Graph) of subtasks.
  - **Optimization Loop**: Generates variants, critiques them, and produces a final optimized solution.
- - **Context Aware**: deeply analyzes your project structure and dependencies.
+ - **Context Aware**: Deeply analyzes project structure, file dependencies, and git history.
 
  ---
 
- ## 📦 Installation (1-Click Setup)
+ ## Installation (1-Click Setup)
+
+ ### 1. Install Globally
+ Open your terminal and run the following command to install the package globally via NPM:
+
+ ```bash
+ npm install -g atlas-pipeline-mcp
+ ```
 
- 1. **Install Globally**
- Open your terminal and run:
- ```bash
- npm install -g atlas-pipeline-mcp
- ```
+ ### 2. Run Auto-Setup
+ Run the setup command to automatically configure your IDE (works for Cursor and VS Code):
 
- 2. **Run Auto-Setup**
- This magic command automatically configures **Cursor** and **VS Code** for you:
- ```bash
- atlas-mcp-setup
- ```
- *(It detects your config file and injects the server settings properly.)*
+ ```bash
+ atlas-mcp-setup
+ ```
 
- 3. **Restart your IDE**
- Restart VS Code or Cursor. You will see a green light for "atlas".
+ ### 3. Restart IDE
+ Restart your editor. You should see the Atlas server connected in your MCP settings.
 
  ---
 
- ## 🛠️ How to Use (Cheat Sheet)
+ ## How to Use (Cheat Sheet)
 
- Once installed, just talk to your AI Assistant (Copilot Chat / Cursor Chat) naturally. Any time you ask for these things, Atlas activates:
+ Once installed, simply chat with your AI Assistant (Copilot Chat or Cursor Chat). The server automatically activates based on your intent.
 
- | 🎯 Goal | 🗣️ What to Ask | 🤖 Tool Used |
+ | Goal | What to Ask | Tool Used |
  | :--- | :--- | :--- |
  | **Fix a complex file** | "Run the **pipeline** on `utils.ts` to refactor it." | `atlas_pipeline` |
  | **Plan a feature** | "**Decompose** the task of adding JWT auth." | `atlas_decompose` |
@@ -52,10 +54,10 @@ Once installed, just talk to your AI Assistant (Copilot Chat / Cursor Chat) natu
 
  ---
 
- ## 🧪 Advanced Integration (Optional)
+ ## Advanced Integration (Optional)
 
- ### Using Local LLMs (Ollama)
- If you prefer running models locally (or outside your IDE's subscription), you can edit your settings to provide keys:
+ ### Using Local LLMs
+ If you prefer running models locally (e.g. Ollama) or want to use your own API keys instead of your IDE's subscription, you can manually configure the server in your settings:
 
  ```json
  "atlas": {
@@ -68,14 +70,17 @@ If you prefer running models locally (or outside your IDE's subscription), you c
    }
  }
  ```
- *Note: If no keys are provided, Atlas defaults to **Client Sampling mode**, asking your IDE (Copilot/Cursor) to generate the responses.*
+
+ *Note: If no keys are provided, Atlas defaults to **Client Sampling mode**, delegating generation to your IDE.*
 
  ---
 
- ## 🤝 Contributing
- Open source and ready for feedback!
+ ## Contributing
+
+ We welcome contributions to improve the Atlas pipeline.
  - **Repository**: [github.com/IamNishant51/atlas-mcp-server](https://github.com/IamNishant51/atlas-mcp-server)
  - **Issues**: Report bugs on GitHub.
 
  ---
- *Built with ❤️ by Antigravity*
+
+ *Built by Antigravity*
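
The `"atlas": { … }` JSON fragment in the README diff above is elided in the middle by the hunk boundary, so for orientation only, here is a minimal sketch of how an MCP server entry of this shape typically sits in an IDE's MCP settings file. The `mcpServers` wrapper follows the common MCP client convention (e.g. Cursor's `mcp.json`); the `command` value and the env var name are illustrative assumptions, not taken from the package:

```json
{
  "mcpServers": {
    "atlas": {
      "command": "atlas-pipeline-mcp",
      "env": {
        "OLLAMA_BASE_URL": "http://localhost:11434"
      }
    }
  }
}
```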
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "atlas-pipeline-mcp",
-   "version": "1.0.11",
+   "version": "1.0.13",
    "mcpName": "io.github.IamNishant51/atlas-pipeline",
    "description": "Model Context Protocol server with multi-stage AI pipeline - works with any LLM provider (Ollama, OpenAI, Anthropic). Compatible with Cursor, GitHub Copilot, Windsurf, and more.",
    "main": "dist/server.js",
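
As an aside, a registry diff like the one above can be reproduced locally: recent npm versions (v7+) ship an `npm diff` command that compares two published versions of a package straight from the registry. A sketch (requires network access to the npm registry):

```shell
# Fetch both published tarballs and print a unified diff of their contents
npm diff --diff=atlas-pipeline-mcp@1.0.11 --diff=atlas-pipeline-mcp@1.0.13
```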