langchain-githubcopilot-chat 0.2.0__tar.gz → 0.4.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 YIhan Wu
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,137 @@
+ Metadata-Version: 2.1
+ Name: langchain-githubcopilot-chat
+ Version: 0.4.0
+ Summary: An integration package connecting GithubcopilotChat and LangChain
+ Home-page: https://github.com/langchain-ai/langchain
+ License: MIT
+ Author: YIhan Wu
+ Author-email: iumm@ibat.ac.cn
+ Requires-Python: >=3.10,<4.0
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Requires-Dist: httpx (>=0.28.1)
+ Requires-Dist: langchain-core (>=1.1.0,<2.0.0)
+ Project-URL: Repository, https://github.com/langchain-ai/langchain
+ Project-URL: Release Notes, https://github.com/langchain-ai/langchain/releases?q=tag%3A%22githubcopilot-chat%3D%3D0%22&expanded=true
+ Project-URL: Source Code, https://github.com/langchain-ai/langchain/tree/master/libs/partners/githubcopilot-chat
+ Description-Content-Type: text/markdown
+
+ # LangChain GitHub Copilot Chat
+
+ This package provides a LangChain integration for **GitHub Copilot**, allowing you to use Copilot's models (including GPT-4o, Claude 3.5 Sonnet, etc.) as standard LangChain `BaseChatModel` components.
+
+ Unlike other integrations, this package mimics the behavior of the official VS Code Copilot Chat extension, providing access to the full suite of models available to Copilot subscribers.
+
+ ## 🚀 Features
+
+ - **Real Copilot API**: Connects to `api.githubcopilot.com` using official VS Code headers.
+ - **Easy Auth**: Built-in GitHub Device Flow for acquiring a valid Copilot token.
+ - **Model Discovery**: Dynamically fetches all models authorized for your account.
+ - **LangChain Native**: Full support for streaming, tool calling, and async operations.
+
+ ## 📦 Installation
+
+ ```bash
+ pip install -U langchain-githubcopilot-chat
+ ```
+
+ ## 🔐 Authentication
+
+ To use GitHub Copilot, you need a valid Copilot token. You can obtain one interactively using the built-in helper:
+
+ ```python
+ from langchain_githubcopilot_chat import get_copilot_token
+
+ # This will prompt you to visit a GitHub URL and enter a code
+ token = get_copilot_token()
+ print(f"Your Token: {token}")
+ ```
+
+ For custom output handling (e.g., in GUI applications), pass a callback:
+
+ ```python
+ from langchain_githubcopilot_chat import get_copilot_token
+
+ def on_message(msg):
+     # Handle status messages (e.g., display in UI)
+     print(f"[Copilot] {msg}")
+
+ token = get_copilot_token(callback=on_message)
+ ```
+
+ Alternatively, set the token as an environment variable:
+ ```bash
+ export GITHUB_TOKEN="your_copilot_token_here"
+ ```
+
+ ## 🛠 Usage
+
+ ### Chat Models
+
+ Access any model supported by Copilot (e.g., `gpt-4o`, `gpt-4o-mini`, `claude-3.5-sonnet`).
+
+ ```python
+ from langchain_githubcopilot_chat import ChatGithubCopilot
+
+ # Initialize with a specific model
+ llm = ChatGithubCopilot(
+     model="gpt-4o",
+     temperature=0.7,
+ )
+
+ # Simple invocation
+ response = llm.invoke("Explain Quantum Entanglement in one sentence.")
+ print(response.content)
+
+ # Streaming
+ for chunk in llm.stream("Write a short poem about coding."):
+     print(chunk.content, end="", flush=True)
+ ```
+
+ ### Discovering Available Models
+
+ GitHub Copilot periodically updates its available models. You can list what's currently available for your token:
+
+ ```python
+ from langchain_githubcopilot_chat import get_available_models
+
+ models = get_available_models()
+ for model in models:
+     print(f"ID: {model['id']} - Name: {model.get('name')}")
+ ```
+
+ ### Embeddings
+
+ Use Copilot's embedding models for RAG or semantic search:
+
+ ```python
+ from langchain_githubcopilot_chat import GithubcopilotChatEmbeddings
+
+ embeddings = GithubcopilotChatEmbeddings(model="text-embedding-3-small")
+ vector = embeddings.embed_query("GitHub Copilot is awesome!")
+ ```
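Once you have vectors from `embed_query` / `embed_documents`, ranking documents for semantic search reduces to cosine similarity between vectors. A minimal standalone sketch with made-up three-dimensional vectors (real embedding vectors are much longer):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Stand-in vectors; in practice these come from the embeddings model.
query_vec = [0.1, 0.8, 0.2]
doc_vecs = {"doc_a": [0.1, 0.7, 0.3], "doc_b": [0.9, 0.0, 0.1]}

# Rank documents by similarity to the query, most similar first.
ranked = sorted(doc_vecs, key=lambda d: cosine_similarity(query_vec, doc_vecs[d]), reverse=True)
print(ranked)  # ['doc_a', 'doc_b']
```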
+
+ ## 📖 Advanced: Tool Calling
+
+ ```python
+ from pydantic import BaseModel, Field
+ from langchain_githubcopilot_chat import ChatGithubCopilot
+
+ class GetWeather(BaseModel):
+     """Get the current weather in a given location."""
+     location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
+
+ llm = ChatGithubCopilot(model="gpt-4o")
+ llm_with_tools = llm.bind_tools([GetWeather])
+
+ ai_msg = llm_with_tools.invoke("What's the weather like in Tokyo?")
+ print(ai_msg.tool_calls)
+ ```
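Each entry in `ai_msg.tool_calls` is a dict with `name`, `args`, and `id` keys; executing a call is then plain dispatch on the name. A minimal sketch with a hand-built payload (the `get_weather` stub is illustrative, not part of this package):

```python
# Hypothetical payload mirroring the shape of one LangChain tool call.
tool_call = {"name": "GetWeather", "args": {"location": "Tokyo"}, "id": "call_1"}

def get_weather(location: str) -> str:
    # Stand-in implementation; a real tool would query a weather API.
    return f"Sunny in {location}"

# Dispatch table from tool name to implementation.
tools = {"GetWeather": get_weather}
result = tools[tool_call["name"]](**tool_call["args"])
print(result)  # Sunny in Tokyo
```

In a full loop you would wrap `result` in a `ToolMessage` carrying `tool_call["id"]` and send it back to the model.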
+
+ ## ⚖️ Disclaimer
+
+ This project is an independent community integration and is not affiliated with, endorsed by, or supported by GitHub, Inc. Usage of this package must comply with GitHub's [Terms of Service](https://docs.github.com/en/site-policy/github-terms/github-terms-of-service).
@@ -0,0 +1,115 @@
+ # LangChain GitHub Copilot Chat
+
+ This package provides a LangChain integration for **GitHub Copilot**, allowing you to use Copilot's models (including GPT-4o, Claude 3.5 Sonnet, etc.) as standard LangChain `BaseChatModel` components.
+
+ Unlike other integrations, this package mimics the behavior of the official VS Code Copilot Chat extension, providing access to the full suite of models available to Copilot subscribers.
+
+ ## 🚀 Features
+
+ - **Real Copilot API**: Connects to `api.githubcopilot.com` using official VS Code headers.
+ - **Easy Auth**: Built-in GitHub Device Flow for acquiring a valid Copilot token.
+ - **Model Discovery**: Dynamically fetches all models authorized for your account.
+ - **LangChain Native**: Full support for streaming, tool calling, and async operations.
+
+ ## 📦 Installation
+
+ ```bash
+ pip install -U langchain-githubcopilot-chat
+ ```
+
+ ## 🔐 Authentication
+
+ To use GitHub Copilot, you need a valid Copilot token. You can obtain one interactively using the built-in helper:
+
+ ```python
+ from langchain_githubcopilot_chat import get_copilot_token
+
+ # This will prompt you to visit a GitHub URL and enter a code
+ token = get_copilot_token()
+ print(f"Your Token: {token}")
+ ```
+
+ For custom output handling (e.g., in GUI applications), pass a callback:
+
+ ```python
+ from langchain_githubcopilot_chat import get_copilot_token
+
+ def on_message(msg):
+     # Handle status messages (e.g., display in UI)
+     print(f"[Copilot] {msg}")
+
+ token = get_copilot_token(callback=on_message)
+ ```
+
+ Alternatively, set the token as an environment variable:
+ ```bash
+ export GITHUB_TOKEN="your_copilot_token_here"
+ ```
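Whichever route you choose, `ChatGithubCopilot` resolves credentials in a fixed priority order: an explicitly passed token, then the `GITHUB_TOKEN` environment variable, then the `~/.github-copilot-chat.json` cache written by the device-flow helper. That resolution amounts to the following (a simplified sketch, not the package's actual code):

```python
def resolve_token(explicit=None, env=None, cache=None):
    # Mirror of the documented priority: explicit > environment > cache file.
    for candidate in (explicit, env, (cache or {}).get("github_token")):
        if candidate:
            return candidate
    raise ValueError("A GitHub token is required.")

# The environment variable wins over the cache when both are present.
print(resolve_token(env="gho_from_env", cache={"github_token": "gho_from_cache"}))
```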
+
+ ## 🛠 Usage
+
+ ### Chat Models
+
+ Access any model supported by Copilot (e.g., `gpt-4o`, `gpt-4o-mini`, `claude-3.5-sonnet`).
+
+ ```python
+ from langchain_githubcopilot_chat import ChatGithubCopilot
+
+ # Initialize with a specific model
+ llm = ChatGithubCopilot(
+     model="gpt-4o",
+     temperature=0.7,
+ )
+
+ # Simple invocation
+ response = llm.invoke("Explain Quantum Entanglement in one sentence.")
+ print(response.content)
+
+ # Streaming
+ for chunk in llm.stream("Write a short poem about coding."):
+     print(chunk.content, end="", flush=True)
+ ```
+
+ ### Discovering Available Models
+
+ GitHub Copilot periodically updates its available models. You can list what's currently available for your token:
+
+ ```python
+ from langchain_githubcopilot_chat import get_available_models
+
+ models = get_available_models()
+ for model in models:
+     print(f"ID: {model['id']} - Name: {model.get('name')}")
+ ```
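The returned entries are plain dicts, so narrowing the list (say, to one model family) is ordinary list work. A sketch over a made-up response in the same shape:

```python
# Made-up model list in the shape yielded by get_available_models().
models = [
    {"id": "gpt-4o", "name": "GPT-4o"},
    {"id": "gpt-4o-mini", "name": "GPT-4o mini"},
    {"id": "claude-3.5-sonnet", "name": "Claude 3.5 Sonnet"},
]

# Keep only GPT-family model IDs.
gpt_ids = [m["id"] for m in models if m["id"].startswith("gpt-")]
print(gpt_ids)  # ['gpt-4o', 'gpt-4o-mini']
```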
+
+ ### Embeddings
+
+ Use Copilot's embedding models for RAG or semantic search:
+
+ ```python
+ from langchain_githubcopilot_chat import GithubcopilotChatEmbeddings
+
+ embeddings = GithubcopilotChatEmbeddings(model="text-embedding-3-small")
+ vector = embeddings.embed_query("GitHub Copilot is awesome!")
+ ```
+
+ ## 📖 Advanced: Tool Calling
+
+ ```python
+ from pydantic import BaseModel, Field
+ from langchain_githubcopilot_chat import ChatGithubCopilot
+
+ class GetWeather(BaseModel):
+     """Get the current weather in a given location."""
+     location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
+
+ llm = ChatGithubCopilot(model="gpt-4o")
+ llm_with_tools = llm.bind_tools([GetWeather])
+
+ ai_msg = llm_with_tools.invoke("What's the weather like in Tokyo?")
+ print(ai_msg.tool_calls)
+ ```
+
+ ## ⚖️ Disclaimer
+
+ This project is an independent community integration and is not affiliated with, endorsed by, or supported by GitHub, Inc. Usage of this package must comply with GitHub's [Terms of Service](https://docs.github.com/en/site-policy/github-terms/github-terms-of-service).
@@ -0,0 +1,216 @@
+ """Authentication utilities for GitHub Copilot."""
+
+ from __future__ import annotations
+
+ import asyncio
+ import json
+ import os
+ import time
+ from typing import Any, Callable, Dict, Optional, Tuple, Union
+
+ import httpx
+
+ CLIENT_ID = "Iv1.b507a08c87ecfe98"
+ CACHE_PATH = os.path.expanduser("~/.github-copilot-chat.json")
+
+ # Shared Copilot headers
+ COPILOT_EDITOR_VERSION = "vscode/1.104.1"
+ COPILOT_PLUGIN_VERSION = "copilot-chat/0.26.7"
+ COPILOT_INTEGRATION_ID = "vscode-chat"
+ COPILOT_USER_AGENT = "GitHubCopilotChat/0.26.7"
+
+ COPILOT_DEFAULT_HEADERS = {
+     "Copilot-Integration-Id": COPILOT_INTEGRATION_ID,
+     "User-Agent": COPILOT_USER_AGENT,
+     "Editor-Version": COPILOT_EDITOR_VERSION,
+     "Editor-Plugin-Version": COPILOT_PLUGIN_VERSION,
+     "editor-version": COPILOT_EDITOR_VERSION,
+     "editor-plugin-version": COPILOT_PLUGIN_VERSION,
+     "copilot-vision-request": "true",
+ }
+
+ # In-memory locks for token refresh to prevent concurrent refresh attempts
+ _token_refresh_lock: Optional[asyncio.Lock] = None
+ _sync_token_refresh_lock: bool = False
+
+
+ def _get_token_refresh_lock() -> asyncio.Lock:
+     """Get or create the async token refresh lock."""
+     global _token_refresh_lock
+     if _token_refresh_lock is None:
+         _token_refresh_lock = asyncio.Lock()
+     return _token_refresh_lock
+
+
+ def save_tokens_to_cache(
+     github_token: str,
+     copilot_token: str,
+     expires_at: Optional[float] = None,
+ ) -> None:
+     """Save tokens to the cache file with an optional expiration time."""
+     try:
+         with open(CACHE_PATH, "w") as f:
+             json.dump(
+                 {
+                     "github_token": github_token,
+                     "copilot_token": copilot_token,
+                     "expires_at": expires_at,
+                 },
+                 f,
+                 indent=2,
+             )
+     except Exception:
+         pass
+
+
+ def load_tokens_from_cache() -> Dict[str, Any]:
+     """Load tokens from the cache file, checking expiration if present."""
+     try:
+         with open(CACHE_PATH, "r") as f:
+             data = json.load(f)
+         # Check if the token has expired
+         if data.get("expires_at"):
+             if time.time() > data["expires_at"]:
+                 # Token expired, return empty
+                 return {}
+         return data
+     except Exception:
+         return {}
+
+
+ def fetch_copilot_token(github_token: str) -> Tuple[Optional[str], Optional[float]]:
+     """Fetch a Copilot token and return it with its expiration time.
+
+     Returns:
+         Tuple of (token, expires_at_timestamp). expires_at is None if not provided.
+     """
+     headers = {
+         "Authorization": f"token {github_token}",
+         "Accept": "application/json",
+         **COPILOT_DEFAULT_HEADERS,
+     }
+     with httpx.Client() as client:
+         res = client.get(
+             "https://api.github.com/copilot_internal/v2/token",
+             headers=headers,
+         )
+         if res.status_code == 200:
+             data = res.json()
+             token = data.get("token")
+             # Copilot tokens typically expire in a few hours.
+             # The API may return 'expires_at' as a Unix timestamp.
+             expires_at = data.get("expires_at")
+             return token, expires_at
+     return None, None
+
+
+ async def afetch_copilot_token(
+     github_token: str,
+ ) -> Tuple[Optional[str], Optional[float]]:
+     """Asynchronously fetch a Copilot token and return it with its expiration time.
+
+     Returns:
+         Tuple of (token, expires_at_timestamp). expires_at is None if not provided.
+     """
+     headers = {
+         "Authorization": f"token {github_token}",
+         "Accept": "application/json",
+         **COPILOT_DEFAULT_HEADERS,
+     }
+     async with httpx.AsyncClient() as client:
+         res = await client.get(
+             "https://api.github.com/copilot_internal/v2/token",
+             headers=headers,
+         )
+         if res.status_code == 200:
+             data = res.json()
+             token = data.get("token")
+             expires_at = data.get("expires_at")
+             return token, expires_at
+     return None, None
+
+
+ def get_copilot_token(
+     client_id: str = CLIENT_ID,
+     callback: Optional[Callable[[str], None]] = None,
+     return_both: bool = False,
+ ) -> Union[Optional[str], Tuple[Optional[str], Optional[str]]]:
+     """
+     Authenticate via GitHub Device Flow to get a Copilot token.
+     This function will block and wait for the user to complete the
+     authorization in their browser.
+
+     Args:
+         client_id: The GitHub OAuth App Client ID to use. Defaults
+             to the VS Code Copilot Chat client ID.
+         callback: Optional callable that receives status messages instead of
+             printing them. If None, messages are printed to stdout.
+         return_both: If True, return a ``(github_token, copilot_token)``
+             tuple instead of only the Copilot token.
+
+     Returns:
+         The fetched Copilot token string, or None if authentication failed.
+     """
+
+     def _print(msg: str) -> None:
+         if callback:
+             callback(msg)
+         else:
+             print(msg)  # noqa: T201
+
+     _print("1. Requesting device code from GitHub...")
+     with httpx.Client() as client:
+         res = client.post(
+             "https://github.com/login/device/code",
+             headers={"Accept": "application/json"},
+             data={"client_id": client_id, "scope": "read:user"},
+         )
+         res.raise_for_status()
+         data = res.json()
+
+     device_code = data.get("device_code")
+     user_code = data.get("user_code")
+     verification_uri = data.get("verification_uri")
+     interval = data.get("interval", 5)
+
+     _print("\n==========================================")
+     _print(f"Please open your browser to: {verification_uri}")
+     _print(f"And enter the authorization code: {user_code}")
+     _print("==========================================\n")
+     _print(f"Waiting for authorization (checking every {interval} seconds)...")
+
+     access_token = None
+     with httpx.Client() as client:
+         while True:
+             token_res = client.post(
+                 "https://github.com/login/oauth/access_token",
+                 headers={"Accept": "application/json"},
+                 data={
+                     "client_id": client_id,
+                     "device_code": device_code,
+                     "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
+                 },
+             ).json()
+
+             if "access_token" in token_res:
+                 access_token = token_res["access_token"]
+                 _print("\n✅ Authorization successful! Exchanging for Copilot Token...")
+                 break
+             elif token_res.get("error") == "authorization_pending":
+                 time.sleep(interval)
+             else:
+                 _print(f"\n❌ Authorization failed: {token_res}")
+                 return None
+
+     # Exchange the standard access token for a Copilot internal token
+     copilot_token, expires_at = fetch_copilot_token(access_token)
+
+     if copilot_token:
+         save_tokens_to_cache(access_token, copilot_token, expires_at)
+         _print("🎉 Successfully acquired Copilot Token!")
+         if return_both:
+             return access_token, copilot_token
+         return copilot_token
+     else:
+         _print("❌ Failed to acquire Copilot Token!")
+         if return_both:
+             return access_token, None
+         return None
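The cache policy implemented by `save_tokens_to_cache` / `load_tokens_from_cache` above reduces to: store `expires_at` alongside the tokens, and treat the whole cache as empty once that timestamp is in the past. The same rule as a standalone sketch over an in-memory dict instead of the JSON file:

```python
import time

def load_if_fresh(cache: dict) -> dict:
    # Mirror of the expiry rule: a past expires_at invalidates the whole cache.
    expires_at = cache.get("expires_at")
    if expires_at is not None and time.time() > expires_at:
        return {}
    return cache

fresh = {"copilot_token": "tid_fresh", "expires_at": time.time() + 3600}
stale = {"copilot_token": "tid_stale", "expires_at": time.time() - 1}
print(load_if_fresh(fresh)["copilot_token"])  # tid_fresh
print(load_if_fresh(stale))  # {}
```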
@@ -42,7 +42,20 @@ from langchain_core.output_parsers.openai_tools import (
  from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
  from langchain_core.tools import BaseTool
  from langchain_core.utils.function_calling import convert_to_openai_tool
- from pydantic import Field, SecretStr, model_validator
+ from pydantic import Field, PrivateAttr, SecretStr, model_validator
+
+ from langchain_githubcopilot_chat.auth import (
+     COPILOT_DEFAULT_HEADERS,
+     COPILOT_EDITOR_VERSION,
+     COPILOT_INTEGRATION_ID,
+     COPILOT_PLUGIN_VERSION,
+     COPILOT_USER_AGENT,
+     _get_token_refresh_lock,
+     afetch_copilot_token,
+     fetch_copilot_token,
+     load_tokens_from_cache,
+     save_tokens_to_cache,
+ )

  # ---------------------------------------------------------------------------
  # Helpers
@@ -59,21 +72,6 @@ _ROLE_MAP = {
  _GITHUB_COPILOT_BASE_URL = "https://api.githubcopilot.com"
  _INFERENCE_PATH = "/chat/completions"

- COPILOT_EDITOR_VERSION = "vscode/1.104.1"
- COPILOT_PLUGIN_VERSION = "copilot-chat/0.26.7"
- COPILOT_INTEGRATION_ID = "vscode-chat"
- COPILOT_USER_AGENT = "GitHubCopilotChat/0.26.7"
-
- COPILOT_DEFAULT_HEADERS = {
-     "Copilot-Integration-Id": COPILOT_INTEGRATION_ID,
-     "User-Agent": COPILOT_USER_AGENT,
-     "Editor-Version": COPILOT_EDITOR_VERSION,
-     "Editor-Plugin-Version": COPILOT_PLUGIN_VERSION,
-     "editor-version": COPILOT_EDITOR_VERSION,
-     "editor-plugin-version": COPILOT_PLUGIN_VERSION,
-     "copilot-vision-request": "true",
- }
-

  def _message_to_dict(message: BaseMessage) -> Dict[str, Any]:
      """Convert a LangChain message to the GitHub Models API message format."""
@@ -206,11 +204,37 @@ def _build_ai_message(

      usage_metadata: Optional[UsageMetadata] = None
      if usage:
-         usage_metadata = UsageMetadata(
-             input_tokens=usage.get("prompt_tokens", 0),
-             output_tokens=usage.get("completion_tokens", 0),
-             total_tokens=usage.get("total_tokens", 0),
-         )
+         input_token_details: Dict[str, Any] = {}
+         if "prompt_tokens_details" in usage:
+             if "cached_tokens" in usage["prompt_tokens_details"]:
+                 input_token_details["cache_read"] = usage["prompt_tokens_details"][
+                     "cached_tokens"
+                 ]
+
+         output_token_details: Dict[str, Any] = {}
+         if "reasoning_tokens" in usage:
+             output_token_details["reasoning"] = usage["reasoning_tokens"]
+         if "completion_tokens_details" in usage:
+             if "accepted_prediction_tokens" in usage["completion_tokens_details"]:
+                 output_token_details["accepted_prediction"] = usage[
+                     "completion_tokens_details"
+                 ]["accepted_prediction_tokens"]
+             if "rejected_prediction_tokens" in usage["completion_tokens_details"]:
+                 output_token_details["rejected_prediction"] = usage[
+                     "completion_tokens_details"
+                 ]["rejected_prediction_tokens"]
+
+         kwargs = {
+             "input_tokens": usage.get("prompt_tokens", 0),
+             "output_tokens": usage.get("completion_tokens", 0),
+             "total_tokens": usage.get("total_tokens", 0),
+         }
+         if input_token_details:
+             kwargs["input_token_details"] = input_token_details
+         if output_token_details:
+             kwargs["output_token_details"] = output_token_details
+
+         usage_metadata = UsageMetadata(**kwargs)

      response_metadata: Dict[str, Any] = {
          "finish_reason": finish_reason,
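The mapping added in this hunk takes an OpenAI-style `usage` payload and lifts the nested detail counters into LangChain's `input_token_details` / `output_token_details`. The core of that transformation, as a standalone sketch on plain dicts:

```python
def parse_usage(usage: dict) -> dict:
    # Flatten the top-level counters and lift cached-token details, mirroring
    # the logic in the diff (only the cache_read branch is shown here).
    parsed = {
        "input_tokens": usage.get("prompt_tokens", 0),
        "output_tokens": usage.get("completion_tokens", 0),
        "total_tokens": usage.get("total_tokens", 0),
    }
    prompt_details = usage.get("prompt_tokens_details", {})
    if "cached_tokens" in prompt_details:
        parsed["input_token_details"] = {"cache_read": prompt_details["cached_tokens"]}
    return parsed

usage = {
    "prompt_tokens": 12,
    "completion_tokens": 5,
    "total_tokens": 17,
    "prompt_tokens_details": {"cached_tokens": 4},
}
print(parse_usage(usage)["input_token_details"])  # {'cache_read': 4}
```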
@@ -455,21 +479,28 @@ class ChatGithubCopilot(BaseChatModel):
      # Validators / setup
      # ------------------------------------------------------------------

+     _cached_copilot_token: Optional[str] = PrivateAttr(default=None)
+
      @model_validator(mode="before")
      @classmethod
      def _validate_token(cls, values: Dict[str, Any]) -> Dict[str, Any]:
-         """Resolve the GitHub token from the environment if not supplied.
+         """Resolve the GitHub token from the environment or cache if not supplied.

          Priority order:
          1. Explicitly passed ``github_token``
          2. Explicitly passed ``api_key`` alias
          3. ``GITHUB_TOKEN`` environment variable
+         4. ``~/.github-copilot-chat.json`` cache file
          """
          token = values.get("github_token") or values.get("api_key")
          if not token:
              token = os.environ.get("GITHUB_TOKEN")
          if token:
              values["github_token"] = token
+         else:
+             tokens = load_tokens_from_cache()
+             if "github_token" in tokens:
+                 values["github_token"] = tokens["github_token"]
          return values

      # ------------------------------------------------------------------
@@ -479,16 +510,74 @@ class ChatGithubCopilot(BaseChatModel):
      @property
      def _token(self) -> str:
          """Return the raw GitHub token string."""
+         if self._cached_copilot_token:
+             return self._cached_copilot_token
+
+         token = None
          if self.github_token:
-             return self.github_token.get_secret_value()
-         env_token = os.environ.get("GITHUB_TOKEN", "")
-         if not env_token:
+             token = self.github_token.get_secret_value()
+         elif os.environ.get("GITHUB_TOKEN"):
+             token = os.environ.get("GITHUB_TOKEN")
+         else:
+             tokens = load_tokens_from_cache()
+             if "copilot_token" in tokens:
+                 self._cached_copilot_token = tokens["copilot_token"]
+                 return tokens["copilot_token"]
+             elif "github_token" in tokens:
+                 token = tokens["github_token"]
+
+         if not token:
              raise ValueError(
-                 "A GitHub token is required. Set the GITHUB_TOKEN environment "
-                 "variable or pass ``github_token`` when instantiating "
-                 "ChatGithubCopilot."
+                 "A GitHub token is required. Set the GITHUB_TOKEN environment "
+                 "variable, pass ``github_token``, or run ``get_copilot_token()`` "
+                 "to authenticate."
              )
-         return env_token
+
+         # If the token is a standard GitHub token, exchange it
+         if token.startswith(("gho_", "ghp_", "ghu_")):
+             self._refresh_token_sync(token)
+             if self._cached_copilot_token:
+                 return self._cached_copilot_token
+
+         return token
+
+     def _refresh_token_sync(self, github_token: Optional[str] = None) -> None:
+         # Use lock to prevent concurrent token refresh
+         global _sync_token_refresh_lock
+         if _sync_token_refresh_lock:
+             return
+         _sync_token_refresh_lock = True
+         try:
+             token_to_use = github_token or (
+                 self.github_token.get_secret_value() if self.github_token else None
+             )
+             if not token_to_use:
+                 tokens = load_tokens_from_cache()
+                 token_to_use = tokens.get("github_token")
+
+             if token_to_use:
+                 new_token, expires_at = fetch_copilot_token(token_to_use)
+                 if new_token:
+                     self._cached_copilot_token = new_token
+                     save_tokens_to_cache(token_to_use, new_token, expires_at)
+         finally:
+             _sync_token_refresh_lock = False
+
+     async def _refresh_token_async(self, github_token: Optional[str] = None) -> None:
+         lock = _get_token_refresh_lock()
+         async with lock:
+             token_to_use = github_token or (
+                 self.github_token.get_secret_value() if self.github_token else None
+             )
+             if not token_to_use:
+                 tokens = load_tokens_from_cache()
+                 token_to_use = tokens.get("github_token")
+
+             if token_to_use:
+                 new_token, expires_at = await afetch_copilot_token(token_to_use)
+                 if new_token:
+                     self._cached_copilot_token = new_token
+                     save_tokens_to_cache(token_to_use, new_token, expires_at)

      @property
      def _inference_url(self) -> str:
@@ -584,6 +673,8 @@ class ChatGithubCopilot(BaseChatModel):

      def _do_request(self, payload: Dict[str, Any]) -> Dict[str, Any]:
          """Perform a synchronous (non-streaming) HTTP POST with retries."""
+         import time
+
          headers = self._build_headers()
          last_exc: Optional[Exception] = None
          for attempt in range(self.max_retries + 1):
@@ -594,6 +685,18 @@ class ChatGithubCopilot(BaseChatModel):
                      json=payload,
                      timeout=self.timeout,
                  )
+
+                 # Handle 401 Unauthorized for token refresh
+                 if response.status_code == 401:
+                     self._refresh_token_sync()
+                     headers = self._build_headers()
+                     response = httpx.post(
+                         self._inference_url,
+                         headers=headers,
+                         json=payload,
+                         timeout=self.timeout,
+                     )
+
                  response.raise_for_status()
                  return response.json()
              except (httpx.TimeoutException, httpx.TransportError) as exc:
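The pattern added here (refresh credentials once on a 401, then replay the same request) can be sketched independently of httpx with a stub client standing in for the real one:

```python
class StubClient:
    """Hypothetical stand-in for an HTTP client whose token has expired."""

    def __init__(self):
        self.token = "expired"

    def post(self):
        # Return 401 until the token has been refreshed.
        return 200 if self.token == "fresh" else 401


def request_with_refresh(client, refresh):
    # One refresh-and-replay on 401, mirroring the logic in the diff.
    status = client.post()
    if status == 401:
        refresh(client)
        status = client.post()
    return status


status = request_with_refresh(StubClient(), lambda c: setattr(c, "token", "fresh"))
print(status)  # 200
```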
@@ -601,12 +704,13 @@ class ChatGithubCopilot(BaseChatModel):
                  if attempt == self.max_retries:
                      raise
              except httpx.HTTPStatusError as exc:
-                 # Don't retry on 4xx client errors
                  if exc.response.status_code < 500:
                      raise
                  last_exc = exc
                  if attempt == self.max_retries:
                      raise
+             if attempt < self.max_retries:
+                 time.sleep(2**attempt)
          raise RuntimeError("Unexpected retry loop exit") from last_exc

      def _do_stream(self, payload: Dict[str, Any]) -> Iterator[Dict[str, Any]]:
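The `time.sleep(2**attempt)` added above yields the classic exponential backoff schedule: 1 s, 2 s, 4 s, ... between attempts, with no sleep after the final one. The schedule itself, as a tiny standalone sketch:

```python
def backoff_schedule(max_retries: int) -> list:
    # Delay (in seconds) inserted after each failed attempt except the last.
    return [2**attempt for attempt in range(max_retries)]

print(backoff_schedule(3))  # [1, 2, 4]
```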
@@ -633,6 +737,8 @@ class ChatGithubCopilot(BaseChatModel):

      async def _do_request_async(self, payload: Dict[str, Any]) -> Dict[str, Any]:
          """Perform an asynchronous (non-streaming) HTTP POST with retries."""
+         import asyncio
+
          headers = self._build_headers()
          last_exc: Optional[Exception] = None
          async with httpx.AsyncClient(timeout=self.timeout) as client:
@@ -643,6 +749,16 @@ class ChatGithubCopilot(BaseChatModel):
                          headers=headers,
                          json=payload,
                      )
+
+                     if response.status_code == 401:
+                         await self._refresh_token_async()
+                         headers = self._build_headers()
+                         response = await client.post(
+                             self._inference_url,
+                             headers=headers,
+                             json=payload,
+                         )
+
                      response.raise_for_status()
                      return response.json()
                  except (httpx.TimeoutException, httpx.TransportError) as exc:
@@ -655,6 +771,8 @@ class ChatGithubCopilot(BaseChatModel):
                      last_exc = exc
                      if attempt == self.max_retries:
                          raise
+                 if attempt < self.max_retries:
+                     await asyncio.sleep(2**attempt)
          raise RuntimeError("Unexpected retry loop exit") from last_exc

      async def _do_stream_async(
@@ -9,24 +9,13 @@ import httpx
  from langchain_core.embeddings import Embeddings
  from pydantic import BaseModel, Field, SecretStr, model_validator

+ from langchain_githubcopilot_chat.auth import (
+     COPILOT_DEFAULT_HEADERS,
+ )
+
  _GITHUB_COPILOT_BASE_URL = "https://api.githubcopilot.com"
  _EMBEDDINGS_PATH = "/embeddings"

- COPILOT_EDITOR_VERSION = "vscode/1.104.1"
- COPILOT_PLUGIN_VERSION = "copilot-chat/0.26.7"
- COPILOT_INTEGRATION_ID = "vscode-chat"
- COPILOT_USER_AGENT = "GitHubCopilotChat/0.26.7"
-
- COPILOT_DEFAULT_HEADERS = {
-     "Copilot-Integration-Id": COPILOT_INTEGRATION_ID,
-     "User-Agent": COPILOT_USER_AGENT,
-     "Editor-Version": COPILOT_EDITOR_VERSION,
-     "Editor-Plugin-Version": COPILOT_PLUGIN_VERSION,
-     "editor-version": COPILOT_EDITOR_VERSION,
-     "editor-plugin-version": COPILOT_PLUGIN_VERSION,
-     "copilot-vision-request": "true",
- }
-

  class GithubcopilotChatEmbeddings(BaseModel, Embeddings):
      """GitHub Copilot Chat embedding model integration via the GitHub Models API.
@@ -4,7 +4,7 @@ build-backend = "poetry.core.masonry.api"

  [tool.poetry]
  name = "langchain-githubcopilot-chat"
- version = "0.2.0"
+ version = "0.4.0"
  description = "An integration package connecting GithubcopilotChat and LangChain"
  authors = ["YIhan Wu <iumm@ibat.ac.cn>"]
  readme = "README.md"
@@ -1,69 +0,0 @@
1
- Metadata-Version: 2.1
2
- Name: langchain-githubcopilot-chat
3
- Version: 0.2.0
4
- Summary: An integration package connecting GithubcopilotChat and LangChain
5
- Home-page: https://github.com/langchain-ai/langchain
6
- License: MIT
7
- Author: YIhan Wu
8
- Author-email: iumm@ibat.ac.cn
9
- Requires-Python: >=3.10,<4.0
10
- Classifier: License :: OSI Approved :: MIT License
11
- Classifier: Programming Language :: Python :: 3
12
- Classifier: Programming Language :: Python :: 3.10
13
- Classifier: Programming Language :: Python :: 3.11
14
- Classifier: Programming Language :: Python :: 3.12
15
- Classifier: Programming Language :: Python :: 3.13
16
- Requires-Dist: httpx (>=0.28.1)
17
- Requires-Dist: langchain-core (>=1.1.0,<2.0.0)
18
- Project-URL: Repository, https://github.com/langchain-ai/langchain
19
- Project-URL: Release Notes, https://github.com/langchain-ai/langchain/releases?q=tag%3A%22githubcopilot-chat%3D%3D0%22&expanded=true
20
- Project-URL: Source Code, https://github.com/langchain-ai/langchain/tree/master/libs/partners/githubcopilot-chat
21
- Description-Content-Type: text/markdown
22
-
23
- # langchain-githubcopilot-chat
24
-
25
- This package contains the LangChain integration with GithubcopilotChat
26
-
27
- ## Installation
28
-
29
- ```bash
30
- pip install -U langchain-githubcopilot-chat
31
- ```
32
-
33
- And you should configure credentials by setting the following environment variables:
34
-
35
- * TODO: fill this out
36
-
37
- ## Chat Models
38
-
39
- `ChatGithubcopilotChat` class exposes chat models from GithubcopilotChat.
40
-
41
- ```python
42
- from langchain_githubcopilot_chat import ChatGithubcopilotChat
43
-
44
- llm = ChatGithubcopilotChat()
45
- llm.invoke("Sing a ballad of LangChain.")
46
- ```
47
-
48
- ## Embeddings
49
-
50
- `GithubcopilotChatEmbeddings` class exposes embeddings from GithubcopilotChat.
51
-
52
- ```python
53
- from langchain_githubcopilot_chat import GithubcopilotChatEmbeddings
54
-
55
- embeddings = GithubcopilotChatEmbeddings()
56
- embeddings.embed_query("What is the meaning of life?")
57
- ```
58
-
59
- ## LLMs
60
-
61
- The `GithubcopilotChatLLM` class exposes LLMs from GithubcopilotChat.
62
-
63
- ```python
64
- from langchain_githubcopilot_chat import GithubcopilotChatLLM
65
-
66
- llm = GithubcopilotChatLLM()
67
- llm.invoke("The meaning of life is")
68
- ```
69
-
@@ -1,46 +0,0 @@
1
- # langchain-githubcopilot-chat
2
-
3
- This package contains the LangChain integration with GithubcopilotChat.
4
-
5
- ## Installation
6
-
7
- ```bash
8
- pip install -U langchain-githubcopilot-chat
9
- ```
10
-
11
- Configure credentials by setting the following environment variables:
12
-
13
- * TODO: fill this out
14
-
15
- ## Chat Models
16
-
17
- The `ChatGithubcopilotChat` class exposes chat models from GithubcopilotChat.
18
-
19
- ```python
20
- from langchain_githubcopilot_chat import ChatGithubcopilotChat
21
-
22
- llm = ChatGithubcopilotChat()
23
- llm.invoke("Sing a ballad of LangChain.")
24
- ```
25
-
26
- ## Embeddings
27
-
28
- The `GithubcopilotChatEmbeddings` class exposes embeddings from GithubcopilotChat.
29
-
30
- ```python
31
- from langchain_githubcopilot_chat import GithubcopilotChatEmbeddings
32
-
33
- embeddings = GithubcopilotChatEmbeddings()
34
- embeddings.embed_query("What is the meaning of life?")
35
- ```
36
-
37
- ## LLMs
38
-
39
- The `GithubcopilotChatLLM` class exposes LLMs from GithubcopilotChat.
40
-
41
- ```python
42
- from langchain_githubcopilot_chat import GithubcopilotChatLLM
43
-
44
- llm = GithubcopilotChatLLM()
45
- llm.invoke("The meaning of life is")
46
- ```
@@ -1,85 +0,0 @@
1
- import time
2
- from typing import Optional
3
-
4
- import httpx
5
-
6
- CLIENT_ID = "Iv1.b507a08c87ecfe98"
7
-
8
-
9
- def get_copilot_token(client_id: str = CLIENT_ID) -> Optional[str]:
10
- """
11
- Authenticate via GitHub Device Flow to get a Copilot Token.
12
- This function will block and wait for the user to complete the
13
- authorization in their browser.
14
-
15
- Args:
16
- client_id: The GitHub OAuth App Client ID to use. Defaults
17
- to the VS Code Copilot Chat client ID.
18
-
19
- Returns:
20
- The fetched Copilot Token string, or None if authentication failed.
21
- """
22
- print("1. Requesting device code from GitHub...") # noqa: T201
23
- with httpx.Client() as client:
24
- res = client.post(
25
- "https://github.com/login/device/code",
26
- headers={"Accept": "application/json"},
27
- data={"client_id": client_id, "scope": "read:user"},
28
- )
29
- res.raise_for_status()
30
- data = res.json()
31
-
32
- device_code = data.get("device_code")
33
- user_code = data.get("user_code")
34
- verification_uri = data.get("verification_uri")
35
- interval = data.get("interval", 5)
36
-
37
- print("\n==========================================") # noqa: T201
38
- print(f"Please open your browser to: {verification_uri}") # noqa: T201
39
- print(f"And enter the authorization code: {user_code}") # noqa: T201
40
- print("==========================================\n") # noqa: T201
41
- print(f"Waiting for authorization (checking every {interval} seconds)...") # noqa: T201
42
-
43
- access_token = None
44
- with httpx.Client() as client:
45
- while True:
46
- token_res = client.post(
47
- "https://github.com/login/oauth/access_token",
48
- headers={"Accept": "application/json"},
49
- data={
50
- "client_id": client_id,
51
- "device_code": device_code,
52
- "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
53
- },
54
- ).json()
55
-
56
- if "access_token" in token_res:
57
- access_token = token_res["access_token"]
58
- print( # noqa: T201
59
- "\n✅ Authorization successful! Exchanging for Copilot Token..."
60
- )
61
- break
62
- elif token_res.get("error") == "authorization_pending":
63
- time.sleep(interval)
64
- else:
65
- print(f"\n❌ Authorization failed: {token_res}") # noqa: T201
66
- return None
67
-
68
- # Exchange the standard access token for a Copilot internal token
69
- copilot_res = client.get(
70
- "https://api.github.com/copilot_internal/v2/token",
71
- headers={
72
- "Authorization": f"token {access_token}",
73
- "Accept": "application/json",
74
- "Editor-Version": "vscode/1.104.1",
75
- "Editor-Plugin-Version": "copilot-chat/0.26.7",
76
- },
77
- )
78
-
79
- if copilot_res.status_code == 200:
80
- copilot_token = copilot_res.json().get("token")
81
- print("🎉 Successfully acquired Copilot Token!") # noqa: T201
82
- return copilot_token
83
- else:
84
- print(f"❌ Failed to acquire Copilot Token: {copilot_res.text}") # noqa: T201
85
- return None
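
The polling loop in the removed auth module above treats any error other than `authorization_pending` as fatal. The device grant spec (RFC 8628, section 3.5) also defines a `slow_down` error, which means the client should keep polling but add 5 seconds to its interval. A minimal sketch of that interval logic, as a standalone helper (the function name `next_poll_interval` is illustrative and not part of the package):

```python
def next_poll_interval(error: str, interval: int) -> int:
    """Compute the next device-flow polling interval per RFC 8628 sec. 3.5.

    'authorization_pending' keeps the current interval; 'slow_down'
    requires adding 5 seconds before polling again. Any other error
    (e.g. 'expired_token', 'access_denied') is fatal, signalled here
    by returning -1 so the caller can break out of its loop.
    """
    if error == "authorization_pending":
        return interval
    if error == "slow_down":
        return interval + 5
    return -1
```

In the loop above, this would replace the bare `elif token_res.get("error") == "authorization_pending":` branch: compute the new interval from the error, abort on -1, otherwise `time.sleep()` for the returned value.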