langchain-postgres 0.0.12__tar.gz → 0.0.14__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,170 @@
+ Metadata-Version: 2.1
+ Name: langchain-postgres
+ Version: 0.0.14
+ Summary: An integration package connecting Postgres and LangChain
+ Home-page: https://github.com/langchain-ai/langchain-postgres
+ License: MIT
+ Requires-Python: >=3.9,<4.0
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Requires-Dist: asyncpg (>=0.30.0,<0.31.0)
+ Requires-Dist: langchain-core (>=0.2.13,<0.4.0)
+ Requires-Dist: numpy (>=1.21,<2.0)
+ Requires-Dist: pgvector (>=0.2.5,<0.4)
+ Requires-Dist: psycopg (>=3,<4)
+ Requires-Dist: psycopg-pool (>=3.2.1,<4.0.0)
+ Requires-Dist: sqlalchemy (>=2,<3)
+ Project-URL: Repository, https://github.com/langchain-ai/langchain-postgres
+ Project-URL: Source Code, https://github.com/langchain-ai/langchain-postgres/tree/master/langchain_postgres
+ Description-Content-Type: text/markdown
+
+ # langchain-postgres
+
+ [![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain-postgres)](https://github.com/langchain-ai/langchain-postgres/releases)
+ [![CI](https://github.com/langchain-ai/langchain-postgres/actions/workflows/ci.yml/badge.svg)](https://github.com/langchain-ai/langchain-postgres/actions/workflows/ci.yml)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
+ [![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
+ [![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain-postgres)](https://github.com/langchain-ai/langchain-postgres/issues)
+
+ The `langchain-postgres` package provides implementations of core LangChain abstractions using `Postgres`.
+
+ The package is released under the MIT license.
+
+ Feel free to use the abstractions as provided, or modify and extend them as appropriate for your own application.
+
+ ## Requirements
+
+ The package supports the [asyncpg](https://github.com/MagicStack/asyncpg) and [psycopg3](https://www.psycopg.org/psycopg3/) drivers.
+
+ ## Installation
+
+ ```bash
+ pip install -U langchain-postgres
+ ```
+
+ ## Usage
+
+ ### Vectorstore
+
+ > [!WARNING]
+ > In v0.0.14+, `PGVector` is deprecated. Please migrate to `PGVectorStore` for improved performance and manageability.
+ > Version 0.0.14 has not been released yet, but you can test the new vector store on the main branch. Do not use it in production until the official release.
+ > See the [migration guide](https://github.com/langchain-ai/langchain-postgres/blob/main/examples/migrate_pgvector_to_pgvectorstore.md) for details on how to migrate from `PGVector` to `PGVectorStore`.
+
+ For a detailed example of `PGVectorStore`, see [here](https://github.com/langchain-ai/langchain-postgres/blob/main/examples/pg_vectorstore.ipynb).
+
+ ```python
+ from langchain_core.documents import Document
+ from langchain_core.embeddings import DeterministicFakeEmbedding
+ from langchain_postgres import PGEngine, PGVectorStore
+
+ # Replace the connection string with your own Postgres connection string.
+ # Note: SQLAlchemy's dialect name for psycopg3 is "psycopg", not "psycopg3".
+ CONNECTION_STRING = "postgresql+psycopg://langchain:langchain@localhost:6024/langchain"
+ engine = PGEngine.from_connection_string(url=CONNECTION_STRING)
+
+ # Replace the vector size with your own vector size
+ VECTOR_SIZE = 768
+ embedding = DeterministicFakeEmbedding(size=VECTOR_SIZE)
+
+ TABLE_NAME = "my_doc_collection"
+
+ # Create the table used to store documents and embeddings
+ engine.init_vectorstore_table(
+     table_name=TABLE_NAME,
+     vector_size=VECTOR_SIZE,
+ )
+
+ store = PGVectorStore.create_sync(
+     engine=engine,
+     table_name=TABLE_NAME,
+     embedding_service=embedding,
+ )
+
+ docs = [
+     Document(page_content="Apples and oranges"),
+     Document(page_content="Cars and airplanes"),
+     Document(page_content="Train"),
+ ]
+
+ store.add_documents(docs)
+
+ query = "I'd like a fruit."
+ docs = store.similarity_search(query)
+ print(docs)
+ ```
+
+ > [!TIP]
+ > All synchronous functions have corresponding asynchronous counterparts; see the sketch below.
+
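+ A minimal async sketch of the same flow, reusing `engine`, `TABLE_NAME`, `VECTOR_SIZE`, and `embedding` from above. The method names are assumptions based on the package's `a`-prefix convention (with async `create()` mirroring `create_sync()`), not confirmed by this diff:
+
+ ```python
+ import asyncio
+
+ from langchain_core.documents import Document
+
+
+ async def main() -> None:
+     # Assumed async mirrors of the synchronous calls shown above.
+     await engine.ainit_vectorstore_table(
+         table_name=TABLE_NAME,
+         vector_size=VECTOR_SIZE,
+     )
+     store = await PGVectorStore.create(
+         engine=engine,
+         table_name=TABLE_NAME,
+         embedding_service=embedding,
+     )
+     await store.aadd_documents([Document(page_content="Apples and oranges")])
+     print(await store.asimilarity_search("I'd like a fruit."))
+
+
+ asyncio.run(main())
+ ```
+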
+ ### ChatMessageHistory
+
+ The chat message history abstraction helps to persist chat message history
+ in a Postgres table.
+
+ `PostgresChatMessageHistory` is parameterized using a `table_name` and a `session_id`.
+
+ The `table_name` is the name of the database table where
+ the chat messages will be stored.
+
+ The `session_id` is a unique identifier for the chat session. It can be assigned
+ by the caller using `uuid.uuid4()`.
+
+ ```python
+ import uuid
+
+ from langchain_core.messages import SystemMessage, AIMessage, HumanMessage
+ from langchain_postgres import PostgresChatMessageHistory
+ import psycopg
+
+ # Establish a synchronous connection to the database
+ # (or use psycopg.AsyncConnection for async)
+ conn_info = ...  # Fill in with your connection info
+ sync_connection = psycopg.connect(conn_info)
+
+ # Create the table schema (only needs to be done once)
+ table_name = "chat_history"
+ PostgresChatMessageHistory.create_tables(sync_connection, table_name)
+
+ session_id = str(uuid.uuid4())
+
+ # Initialize the chat history manager
+ chat_history = PostgresChatMessageHistory(
+     table_name,
+     session_id,
+     sync_connection=sync_connection
+ )
+
+ # Add messages to the chat history
+ chat_history.add_messages([
+     SystemMessage(content="Meow"),
+     AIMessage(content="woof"),
+     HumanMessage(content="bark"),
+ ])
+
+ print(chat_history.messages)
+ ```
+
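+ For the async path hinted at in the comments above, a hedged sketch; `acreate_tables`, the `async_connection` keyword, `aadd_messages`, and `aget_messages` are assumed async mirrors of the synchronous API:
+
+ ```python
+ import asyncio
+ import uuid
+
+ import psycopg
+ from langchain_core.messages import HumanMessage
+ from langchain_postgres import PostgresChatMessageHistory
+
+
+ async def main() -> None:
+     # conn_info as in the synchronous example above
+     async_connection = await psycopg.AsyncConnection.connect(conn_info)
+     await PostgresChatMessageHistory.acreate_tables(async_connection, "chat_history")
+     chat_history = PostgresChatMessageHistory(
+         "chat_history",
+         str(uuid.uuid4()),
+         async_connection=async_connection,
+     )
+     await chat_history.aadd_messages([HumanMessage(content="hello")])
+     print(await chat_history.aget_messages())
+
+
+ asyncio.run(main())
+ ```
+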
+ ## Google Cloud Integrations
+
+ [Google Cloud](https://python.langchain.com/docs/integrations/providers/google/) provides Vector Store, Chat Message History, and Data Loader integrations for [AlloyDB](https://cloud.google.com/alloydb) and [Cloud SQL](https://cloud.google.com/sql) for PostgreSQL databases via the following PyPI packages:
+
+ * [`langchain-google-alloydb-pg`](https://github.com/googleapis/langchain-google-alloydb-pg-python)
+
+ * [`langchain-google-cloud-sql-pg`](https://github.com/googleapis/langchain-google-cloud-sql-pg-python)
+
+ Using the Google Cloud integrations provides the following benefits:
+
+ - **Enhanced Security**: Securely connect to Google Cloud databases using IAM for authorization and database authentication, without needing to manage SSL certificates, configure firewall rules, or enable authorized networks.
+ - **Simplified and Secure Connections**: Connect to Google Cloud databases effortlessly using the instance name instead of complex connection strings. The integrations create a secure connection pool that can be easily shared across your application using the `engine` object.
+
+ | Vector Store              | Metadata filtering | Async support | Schema Flexibility | Improved metadata handling | Hybrid Search |
+ |---------------------------|--------------------|---------------|--------------------|----------------------------|---------------|
+ | Google AlloyDB            | ✓                  | ✓             | ✓                  | ✓                          | ✗             |
+ | Google Cloud SQL Postgres | ✓                  | ✓             | ✓                  | ✓                          | ✗             |
+
@@ -2,6 +2,8 @@ from importlib import metadata
 
  from langchain_postgres.chat_message_histories import PostgresChatMessageHistory
  from langchain_postgres.translator import PGVectorTranslator
+ from langchain_postgres.v2.engine import Column, ColumnDict, PGEngine
+ from langchain_postgres.v2.vectorstores import PGVectorStore
  from langchain_postgres.vectorstores import PGVector
 
  try:
@@ -12,7 +14,11 @@ except metadata.PackageNotFoundError:
 
  __all__ = [
      "__version__",
+     "Column",
+     "ColumnDict",
+     "PGEngine",
      "PostgresChatMessageHistory",
      "PGVector",
+     "PGVectorStore",
      "PGVectorTranslator",
  ]
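
The `__init__.py` hunk above re-exports the new v2 classes at the package root, so downstream code can import them directly:

```python
# These names are added to __all__ by the hunk above, so they are
# importable straight from the package root.
from langchain_postgres import Column, ColumnDict, PGEngine, PGVectorStore
```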
@@ -340,11 +340,17 @@ class PostgresChatMessageHistory(BaseChatMessageHistory):
          messages = messages_from_dict(items)
          return messages
 
-     @property # type: ignore[override]
+     @property
      def messages(self) -> List[BaseMessage]:
          """The abstraction requires a property."""
          return self.get_messages()
 
+     @messages.setter
+     def messages(self, value: list[BaseMessage]) -> None:
+         """Clear the stored messages and append the given messages."""
+         self.clear()
+         self.add_messages(value)
+
      def clear(self) -> None:
          """Clear the chat message history for the GIVEN session."""
          if self._connection is None:
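
The new `messages` setter makes assignment replace a session's stored history rather than append to it. A minimal sketch, reusing the `chat_history` object from the README example:

```python
from langchain_core.messages import HumanMessage

# Assignment clears the session's stored rows, then writes the new list.
chat_history.messages = [HumanMessage(content="Start over")]
assert [m.content for m in chat_history.messages] == ["Start over"]
```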
@@ -0,0 +1,321 @@
+ import asyncio
+ import json
+ import warnings
+ from typing import Any, AsyncIterator, Iterator, Optional, Sequence, TypeVar
+
+ from sqlalchemy import RowMapping, text
+ from sqlalchemy.exc import ProgrammingError, SQLAlchemyError
+
+ from ..v2.engine import PGEngine
+ from ..v2.vectorstores import PGVectorStore
+
+ COLLECTIONS_TABLE = "langchain_pg_collection"
+ EMBEDDINGS_TABLE = "langchain_pg_embedding"
+
+ T = TypeVar("T")
+
+
+ async def __aget_collection_uuid(
+     engine: PGEngine,
+     collection_name: str,
+ ) -> str:
+     """
+     Get the collection uuid for a collection present in PGVector tables.
+
+     Args:
+         engine (PGEngine): The PG engine corresponding to the Database.
+         collection_name (str): The name of the collection to get the uuid for.
+     Returns:
+         The uuid corresponding to the collection.
+     """
+     query = f"SELECT name, uuid FROM {COLLECTIONS_TABLE} WHERE name = :collection_name"
+     async with engine._pool.connect() as conn:
+         result = await conn.execute(
+             text(query), parameters={"collection_name": collection_name}
+         )
+         result_map = result.mappings()
+         result_fetch = result_map.fetchone()
+     if result_fetch is None:
+         raise ValueError(f"Collection, {collection_name} not found.")
+     return result_fetch.uuid
+
+
+ async def __aextract_pgvector_collection(
+     engine: PGEngine,
+     collection_name: str,
+     batch_size: int = 1000,
+ ) -> AsyncIterator[Sequence[RowMapping]]:
+     """
+     Extract all data belonging to a PGVector collection.
+
+     Args:
+         engine (PGEngine): The PG engine corresponding to the Database.
+         collection_name (str): The name of the collection to get the data for.
+         batch_size (int): The batch size for collection extraction.
+             Default: 1000. Optional.
+
+     Yields:
+         The data present in the collection.
+     """
+     try:
+         uuid_task = asyncio.create_task(__aget_collection_uuid(engine, collection_name))
+         query = f"SELECT * FROM {EMBEDDINGS_TABLE} WHERE collection_id = :id"
+         async with engine._pool.connect() as conn:
+             uuid = await uuid_task
+             result_proxy = await conn.execute(text(query), parameters={"id": uuid})
+             while True:
+                 rows = result_proxy.fetchmany(size=batch_size)
+                 if not rows:
+                     break
+                 yield [row._mapping for row in rows]
+     except ValueError:
+         raise ValueError(f"Collection, {collection_name} does not exist.")
+     except SQLAlchemyError as e:
+         raise ProgrammingError(
+             statement=f"Failed to extract data from collection '{collection_name}': {e}",
+             params={"id": uuid},
+             orig=e,
+         ) from e
+
+
+ async def __concurrent_batch_insert(
+     data_batches: AsyncIterator[Sequence[RowMapping]],
+     vector_store: PGVectorStore,
+     max_concurrency: int = 100,
+ ) -> None:
+     pending: set[Any] = set()
+     async for batch_data in data_batches:
+         pending.add(
+             asyncio.ensure_future(
+                 vector_store.aadd_embeddings(
+                     texts=[data.document for data in batch_data],
+                     embeddings=[json.loads(data.embedding) for data in batch_data],
+                     metadatas=[data.cmetadata for data in batch_data],
+                     ids=[data.id for data in batch_data],
+                 )
+             )
+         )
+         if len(pending) >= max_concurrency:
+             _, pending = await asyncio.wait(
+                 pending, return_when=asyncio.FIRST_COMPLETED
+             )
+     if pending:
+         await asyncio.wait(pending)
+
+
+ async def __amigrate_pgvector_collection(
+     engine: PGEngine,
+     collection_name: str,
+     vector_store: PGVectorStore,
+     delete_pg_collection: Optional[bool] = False,
+     insert_batch_size: int = 1000,
+ ) -> None:
+     """
+     Migrate all data present in a PGVector collection to use separate tables for each collection.
+     The new data format is compatible with the PGVectorStore interface.
+
+     Args:
+         engine (PGEngine): The PG engine corresponding to the Database.
+         collection_name (str): The collection to migrate.
+         vector_store (PGVectorStore): The PGVectorStore object corresponding to the new collection table.
+         delete_pg_collection (bool): An option to delete the original data upon migration.
+             Default: False. Optional.
+         insert_batch_size (int): Number of rows to insert at once in the table.
+             Default: 1000.
+     """
+     destination_table = vector_store.get_table_name()
+
+     # Get row count in PGVector collection
+     uuid_task = asyncio.create_task(__aget_collection_uuid(engine, collection_name))
+     query = (
+         f"SELECT COUNT(*) FROM {EMBEDDINGS_TABLE} WHERE collection_id=:collection_id"
+     )
+     async with engine._pool.connect() as conn:
+         uuid = await uuid_task
+         result = await conn.execute(text(query), parameters={"collection_id": uuid})
+         result_map = result.mappings()
+         collection_data_len = result_map.fetchone()
+     if collection_data_len is None:
+         warnings.warn(f"Collection, {collection_name} contains no elements.")
+         return
+
+     # Extract data from the collection and batch insert into the new table
+     data_batches = __aextract_pgvector_collection(
+         engine, collection_name, batch_size=insert_batch_size
+     )
+     await __concurrent_batch_insert(data_batches, vector_store, max_concurrency=100)
+
+     # Validate data migration
+     query = f"SELECT COUNT(*) FROM {destination_table}"
+     async with engine._pool.connect() as conn:
+         result = await conn.execute(text(query))
+         result_map = result.mappings()
+         table_size = result_map.fetchone()
+     if not table_size:
+         raise ValueError(f"Table: {destination_table} does not exist.")
+
+     if collection_data_len["count"] != table_size["count"]:
+         raise ValueError(
+             "All data not yet migrated.\n"
+             f"Original row count: {collection_data_len['count']}\n"
+             f"Collection table, {destination_table} row count: {table_size['count']}"
+         )
+     elif delete_pg_collection:
+         # Delete PGVector data
+         query = f"DELETE FROM {EMBEDDINGS_TABLE} WHERE collection_id=:collection_id"
+         async with engine._pool.connect() as conn:
+             await conn.execute(text(query), parameters={"collection_id": uuid})
+             await conn.commit()
+
+         query = f"DELETE FROM {COLLECTIONS_TABLE} WHERE name=:collection_name"
+         async with engine._pool.connect() as conn:
+             await conn.execute(
+                 text(query), parameters={"collection_name": collection_name}
+             )
+             await conn.commit()
+         print(f"Successfully deleted PGVector collection, {collection_name}")
+
+
+ async def __alist_pgvector_collection_names(
+     engine: PGEngine,
+ ) -> list[str]:
+     """Lists all collection names present in PGVector table."""
+     try:
+         query = f"SELECT name from {COLLECTIONS_TABLE}"
+         async with engine._pool.connect() as conn:
+             result = await conn.execute(text(query))
+             result_map = result.mappings()
+             all_rows = result_map.fetchall()
+         return [row["name"] for row in all_rows]
+     except ProgrammingError as e:
+         raise ValueError(
+             "Please provide the correct collection table name: " + str(e)
+         ) from e
+
+
+ async def aextract_pgvector_collection(
+     engine: PGEngine,
+     collection_name: str,
+     batch_size: int = 1000,
+ ) -> AsyncIterator[Sequence[RowMapping]]:
+     """
+     Extract all data belonging to a PGVector collection.
+
+     Args:
+         engine (PGEngine): The PG engine corresponding to the Database.
+         collection_name (str): The name of the collection to get the data for.
+         batch_size (int): The batch size for collection extraction.
+             Default: 1000. Optional.
+
+     Yields:
+         The data present in the collection.
+     """
+     iterator = __aextract_pgvector_collection(engine, collection_name, batch_size)
+     while True:
+         try:
+             result = await engine._run_as_async(iterator.__anext__())
+             yield result
+         except StopAsyncIteration:
+             break
+
+
+ async def alist_pgvector_collection_names(
+     engine: PGEngine,
+ ) -> list[str]:
+     """Lists all collection names present in PGVector table."""
+     return await engine._run_as_async(__alist_pgvector_collection_names(engine))
+
+
+ async def amigrate_pgvector_collection(
+     engine: PGEngine,
+     collection_name: str,
+     vector_store: PGVectorStore,
+     delete_pg_collection: Optional[bool] = False,
+     insert_batch_size: int = 1000,
+ ) -> None:
+     """
+     Migrate all data present in a PGVector collection to use separate tables for each collection.
+     The new data format is compatible with the PGVectorStore interface.
+
+     Args:
+         engine (PGEngine): The PG engine corresponding to the Database.
+         collection_name (str): The collection to migrate.
+         vector_store (PGVectorStore): The PGVectorStore object corresponding to the new collection table.
+         delete_pg_collection (bool): An option to delete the original data upon migration.
+             Default: False. Optional.
+         insert_batch_size (int): Number of rows to insert at once in the table.
+             Default: 1000.
+     """
+     await engine._run_as_async(
+         __amigrate_pgvector_collection(
+             engine,
+             collection_name,
+             vector_store,
+             delete_pg_collection,
+             insert_batch_size,
+         )
+     )
+
+
+ def extract_pgvector_collection(
+     engine: PGEngine,
+     collection_name: str,
+     batch_size: int = 1000,
+ ) -> Iterator[Sequence[RowMapping]]:
+     """
+     Extract all data belonging to a PGVector collection.
+
+     Args:
+         engine (PGEngine): The PG engine corresponding to the Database.
+         collection_name (str): The name of the collection to get the data for.
+         batch_size (int): The batch size for collection extraction.
+             Default: 1000. Optional.
+
+     Yields:
+         The data present in the collection.
+     """
+     iterator = __aextract_pgvector_collection(engine, collection_name, batch_size)
+     while True:
+         try:
+             result = engine._run_as_sync(iterator.__anext__())
+             yield result
+         except StopAsyncIteration:
+             break
+
+
+ def list_pgvector_collection_names(engine: PGEngine) -> list[str]:
+     """Lists all collection names present in PGVector table."""
+     return engine._run_as_sync(__alist_pgvector_collection_names(engine))
+
+
+ def migrate_pgvector_collection(
+     engine: PGEngine,
+     collection_name: str,
+     vector_store: PGVectorStore,
+     delete_pg_collection: Optional[bool] = False,
+     insert_batch_size: int = 1000,
+ ) -> None:
+     """
+     Migrate all data present in a PGVector collection to use separate tables for each collection.
+     The new data format is compatible with the PGVectorStore interface.
+
+     Args:
+         engine (PGEngine): The PG engine corresponding to the Database.
+         collection_name (str): The collection to migrate.
+         vector_store (PGVectorStore): The PGVectorStore object corresponding to the new collection table.
+         delete_pg_collection (bool): An option to delete the original data upon migration.
+             Default: False. Optional.
+         insert_batch_size (int): Number of rows to insert at once in the table.
+             Default: 1000.
+     """
+     engine._run_as_sync(
+         __amigrate_pgvector_collection(
+             engine,
+             collection_name,
+             vector_store,
+             delete_pg_collection,
+             insert_batch_size,
+         )
+     )
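
A usage sketch for the migration helpers defined in the new module. The import path is an assumption (this diff does not show the module's filename); everything else follows the signatures above:

```python
import asyncio

from langchain_postgres import PGEngine, PGVectorStore

# Hypothetical module path; adjust to wherever the package places this file.
from langchain_postgres.utils.pgvector_migrator import (
    alist_pgvector_collection_names,
    amigrate_pgvector_collection,
)


async def migrate(engine: PGEngine, store: PGVectorStore) -> None:
    # Discover legacy PGVector collections, then copy one into the
    # per-collection table behind `store`; the legacy rows are deleted
    # only after the row-count validation succeeds.
    names = await alist_pgvector_collection_names(engine)
    await amigrate_pgvector_collection(
        engine,
        collection_name=names[0],
        vector_store=store,
        delete_pg_collection=True,
    )
```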