llm-ie 0.4.6__py3-none-any.whl → 1.0.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Metadata-Version: 2.1
Name: llm-ie
Version: 0.4.6
Summary: An LLM-powered tool that transforms everyday language into robust information extraction pipelines.
License: MIT
Author: Enshuo (David) Hsu
Requires-Python: >=3.11,<4.0
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Dist: colorama (>=0.4.6,<0.5.0)
Requires-Dist: json_repair (>=0.30,<0.31)
Requires-Dist: nest_asyncio (>=1.6.0,<2.0.0)
Requires-Dist: nltk (>=3.8,<4.0)
Description-Content-Type: text/markdown

<div align="center"><img src=doc_asset/readme_img/LLM-IE.png width=500 ></div>

![Python Version](https://img.shields.io/pypi/pyversions/llm-ie)
![PyPI](https://img.shields.io/pypi/v/llm-ie)

An LLM-powered tool that transforms everyday language into robust information extraction pipelines.

| Features | Support |
|----------|----------|
| **LLM Agent for prompt writing** | :white_check_mark: Interactive chat, Python functions |
| **Named Entity Recognition (NER)** | :white_check_mark: Document-level, Sentence-level |
| **Entity Attributes Extraction** | :white_check_mark: Flexible formats |
| **Relation Extraction (RE)** | :white_check_mark: Binary & Multiclass relations |
| **Visualization** | :white_check_mark: Built-in entity & relation visualization |

## Recent Updates
- [v0.3.0](https://github.com/daviden1013/llm-ie/releases/tag/v0.3.0) (Oct 17, 2024): Added interactive chat to the prompt editor LLM agent.
- [v0.3.1](https://github.com/daviden1013/llm-ie/releases/tag/v0.3.1) (Oct 26, 2024): Added Sentence Review Frame Extractor and Sentence CoT Frame Extractor.
- [v0.3.4](https://github.com/daviden1013/llm-ie/releases/tag/v0.3.4) (Nov 24, 2024): Added entity fuzzy search.
- [v0.3.5](https://github.com/daviden1013/llm-ie/releases/tag/v0.3.5) (Nov 27, 2024): Adopted `json_repair` to fix broken JSON from LLM outputs.
- [v0.4.0](https://github.com/daviden1013/llm-ie/releases/tag/v0.4.0) (Jan 4, 2025):
    - Concurrent LLM inferencing to speed up frame and relation extraction.
    - Support for LiteLLM.
- [v0.4.1](https://github.com/daviden1013/llm-ie/releases/tag/v0.4.1) (Jan 25, 2025): Added filters, table view, and other new features to the visualization tool (make sure to update [ie-viz](https://github.com/daviden1013/ie-viz)).
- [v0.4.3](https://github.com/daviden1013/llm-ie/releases/tag/v0.4.3) (Feb 7, 2025): Added Azure OpenAI support.
- [v0.4.5](https://github.com/daviden1013/llm-ie/releases/tag/v0.4.5) (Feb 16, 2025):
    - Added an option to adjust the number of context sentences in sentence-based extractors.
    - Added support for OpenAI reasoning models ("o" series).
- [v0.4.6](https://github.com/daviden1013/llm-ie/releases/tag/v0.4.6) (Mar 1, 2025): Allowed the LLM to output overlapping frames.

## Table of Contents
- [Overview](#overview)
- [Prerequisite](#prerequisite)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Examples](#examples)
- [User Guide](#user-guide)
  - [LLM Inference Engine](#llm-inference-engine)
  - [Prompt Template](#prompt-template)
  - [Prompt Editor LLM Agent](#prompt-editor-llm-agent)
  - [Extractor](#extractor)
    - [FrameExtractor](#frameextractor)
    - [RelationExtractor](#relationextractor)
  - [Visualization](#visualization)
- [Benchmarks](#benchmarks)
- [Citation](#citation)

## Overview
LLM-IE is a toolkit that provides robust information extraction utilities for named entity, entity attribute, and entity relation extraction. Since prompt design has a significant impact on generative information extraction with LLMs, it has a built-in LLM agent ("editor") to help with prompt writing. The flowchart below demonstrates the workflow starting from a casual language request to output visualization.

<div align="center"><img src="doc_asset/readme_img/LLM-IE flowchart.png" width=800 ></div>

## Prerequisite
At least one LLM inference engine is required. There is built-in support for 🚅 [LiteLLM](https://github.com/BerriAI/litellm), 🦙 [Llama-cpp-python](https://github.com/abetlen/llama-cpp-python), <img src="doc_asset/readme_img/ollama_icon.png" alt="Icon" width="22"/> [Ollama](https://github.com/ollama/ollama), 🤗 [Huggingface_hub](https://github.com/huggingface/huggingface_hub), <img src=doc_asset/readme_img/openai-logomark_white.png width=16 /> [OpenAI API](https://platform.openai.com/docs/api-reference/introduction), and <img src=doc_asset/readme_img/vllm-logo_small.png width=20 /> [vLLM](https://github.com/vllm-project/vllm). For installation guides, please refer to those projects. Other inference engines can be configured through the [InferenceEngine](src/llm_ie/engines.py) abstract class. See the [LLM Inference Engine](#llm-inference-engine) section below.

## Installation
The Python package is available on PyPI.
```
pip install llm-ie
```
Note that this package neither checks for nor installs LLM inference engines. See the [Prerequisite](#prerequisite) section for details.

## Quick Start
We use a [synthesized medical note](demo/document/synthesized_note.txt) generated by ChatGPT to demo the information extraction process. Our task is to extract diagnosis names, spans, and corresponding attributes (i.e., diagnosis datetime, status).

#### Choose an LLM inference engine
Choose one of the built-in engines below.

<details>
<summary>🚅 LiteLLM</summary>

```python
from llm_ie.engines import LiteLLMInferenceEngine

inference_engine = LiteLLMInferenceEngine(model="openai/Llama-3.3-70B-Instruct", base_url="http://localhost:8000/v1", api_key="EMPTY")
```
</details>

<details>
<summary><img src=doc_asset/readme_img/openai-logomark_white.png width=16 /> OpenAI API</summary>

Follow the [Best Practices for API Key Safety](https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety) to set up the API key.
```python
from llm_ie.engines import OpenAIInferenceEngine

inference_engine = OpenAIInferenceEngine(model="gpt-4o-mini")
```
</details>

<details>
<summary><img src=doc_asset/readme_img/Azure_icon.png width=32 /> Azure OpenAI API</summary>

Follow the [Azure AI Services Quickstart](https://learn.microsoft.com/en-us/azure/ai-services/openai/quickstart?tabs=command-line%2Ckeyless%2Ctypescript-keyless%2Cpython-new&pivots=programming-language-python) to set up the Endpoint and API key.

```python
from llm_ie.engines import AzureOpenAIInferenceEngine

inference_engine = AzureOpenAIInferenceEngine(model="gpt-4o-mini",
                                              api_version="<your api version>")
```

</details>

<details>
<summary>🤗 Huggingface_hub</summary>

```python
from llm_ie.engines import HuggingFaceHubInferenceEngine

inference_engine = HuggingFaceHubInferenceEngine(model="meta-llama/Meta-Llama-3-8B-Instruct")
```
</details>

<details>
<summary><img src="doc_asset/readme_img/ollama_icon.png" alt="Icon" width="22"/> Ollama</summary>

```python
from llm_ie.engines import OllamaInferenceEngine

inference_engine = OllamaInferenceEngine(model_name="llama3.1:8b-instruct-q8_0")
```
</details>

<details>
<summary><img src=doc_asset/readme_img/vllm-logo_small.png width=20 /> vLLM</summary>

The vLLM support follows the [OpenAI Compatible Server](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html). For more parameters, please refer to the documentation.

Start the server
```cmd
vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct
```
Define the inference engine
```python
from llm_ie.engines import OpenAIInferenceEngine

inference_engine = OpenAIInferenceEngine(base_url="http://localhost:8000/v1",
                                         api_key="EMPTY",
                                         model="meta-llama/Meta-Llama-3.1-8B-Instruct")
```
</details>

<details>
<summary>🦙 Llama-cpp-python</summary>

```python
from llm_ie.engines import LlamaCppInferenceEngine

inference_engine = LlamaCppInferenceEngine(repo_id="bullerwins/Meta-Llama-3.1-8B-Instruct-GGUF",
                                           gguf_filename="Meta-Llama-3.1-8B-Instruct-Q8_0.gguf")
```
</details>

In this quick start demo, we use Ollama to run Llama-3.1-8B with int8 quantization.
The outputs might be slightly different with other inference engines, LLMs, or quantization.

#### Casual language as prompt
We start with a casual description:

*"Extract diagnosis from the clinical note. Make sure to include diagnosis date and status."*

Define the AI prompt editor.
```python
from llm_ie import OllamaInferenceEngine, PromptEditor, SentenceFrameExtractor

# Define an LLM inference engine
inference_engine = OllamaInferenceEngine(model_name="llama3.1:8b-instruct-q8_0")
# Define LLM prompt editor
editor = PromptEditor(inference_engine, SentenceFrameExtractor)
# Start chat
editor.chat()
```

This opens an interactive session:
<div align="left"><img src=doc_asset/readme_img/terminal_chat.PNG width=1000 ></div>

The ```PromptEditor``` drafts a prompt template following the schema required by the ```SentenceFrameExtractor```:

```
# Task description
The paragraph below contains a clinical note with diagnoses listed. Please carefully review it and extract the diagnoses, including the diagnosis date and status.

# Schema definition
Your output should contain:
"Diagnosis" which is the name of the diagnosis,
"Date" which is the date when the diagnosis was made,
"Status" which is the current status of the diagnosis (e.g. active, resolved, etc.)

# Output format definition
Your output should follow JSON format, for example:
[
{"Diagnosis": "<Diagnosis text>", "Date": "<date in YYYY-MM-DD format>", "Status": "<status>"},
{"Diagnosis": "<Diagnosis text>", "Date": "<date in YYYY-MM-DD format>", "Status": "<status>"}
]

# Additional hints
Your output should be 100% based on the provided content. DO NOT output fake information.
If there is no specific date or status, just omit those keys.

# Input placeholder
Below is the clinical note:
{{input}}
```

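The chat session is interactive. To capture a drafted template programmatically, the `rewrite()` method (covered in the [Prompt Editor LLM Agent](#prompt-editor-llm-agent) section) returns the template as a string. A minimal sketch:

```python
# Draft a template from the same casual description, without the chat UI.
prompt_template = editor.rewrite("Extract diagnosis from the clinical note. Make sure to include diagnosis date and status.")
```
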
#### Information extraction pipeline
Now we apply the prompt template to build an information extraction pipeline.

```python
# Load synthesized medical note
with open("./demo/document/synthesized_note.txt", 'r') as f:
    note_text = f.read()

# Define extractor with the drafted prompt template
extractor = SentenceFrameExtractor(inference_engine, prompt_template)

# Extract
# To stream the extraction process, use concurrent=False, stream=True:
frames = extractor.extract_frames(note_text, entity_key="Diagnosis", concurrent=False, stream=True)
# For faster extraction, use concurrent=True to enable asynchronous prompting
frames = extractor.extract_frames(note_text, entity_key="Diagnosis", concurrent=True)

# Check extractions
for frame in frames:
    print(frame.to_dict())
```
The output is a list of frames. Each frame has an ```entity_text```, ```start```, ```end```, and a dictionary of ```attr```.

```python
{'frame_id': '0', 'start': 537, 'end': 549, 'entity_text': 'hypertension', 'attr': {'Date': '2010-01-01', 'Status': 'Active'}}
{'frame_id': '1', 'start': 551, 'end': 565, 'entity_text': 'hyperlipidemia', 'attr': {'Date': '2015-01-01', 'Status': 'Active'}}
{'frame_id': '2', 'start': 571, 'end': 595, 'entity_text': 'Type 2 diabetes mellitus', 'attr': {'Date': '2018-01-01', 'Status': 'Active'}}
{'frame_id': '3', 'start': 660, 'end': 670, 'entity_text': 'chest pain', 'attr': {'Date': 'July 18, 2024'}}
{'frame_id': '4', 'start': 991, 'end': 1003, 'entity_text': 'Hypertension', 'attr': {'Date': '2010-01-01'}}
{'frame_id': '5', 'start': 1026, 'end': 1040, 'entity_text': 'Hyperlipidemia', 'attr': {'Date': '2015-01-01'}}
{'frame_id': '6', 'start': 1063, 'end': 1087, 'entity_text': 'Type 2 Diabetes Mellitus', 'attr': {'Date': '2018-01-01'}}
{'frame_id': '7', 'start': 1926, 'end': 1947, 'entity_text': 'ST-segment depression', 'attr': None}
{'frame_id': '8', 'start': 2049, 'end': 2066, 'entity_text': 'acute infiltrates', 'attr': None}
{'frame_id': '9', 'start': 2117, 'end': 2150, 'entity_text': 'Mild left ventricular hypertrophy', 'attr': None}
{'frame_id': '10', 'start': 2402, 'end': 2425, 'entity_text': 'acute coronary syndrome', 'attr': {'Date': 'July 20, 2024', 'Status': 'Active'}}
```
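
Frames can also be inspected through their attributes directly. A small illustration (same fields as listed above):

```python
# Access the fields of the first frame.
frame = frames[0]
print(frame.entity_text, frame.start, frame.end, frame.attr)
```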

We can save the frames to a document object for better management. The document holds ```text``` and ```frames```. The ```add_frame()``` method performs validation and (if it passes) adds a frame to the document.
The ```valid_mode``` controls how frame validation is performed. For example, ```valid_mode="span"``` prevents a new frame from being added if a frame with the same span (```start```, ```end```) already exists. Setting ```create_id=True``` has the document assign unique frame IDs.

```python
from llm_ie.data_types import LLMInformationExtractionDocument

# Define document
doc = LLMInformationExtractionDocument(doc_id="Synthesized medical note",
                                       text=note_text)
# Add frames to a document
doc.add_frames(frames, create_id=True)

# Save document to file (.llmie)
doc.save("<your filename>.llmie")
```
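
To enforce span-based deduplication while adding, pass `valid_mode` as described above. A minimal sketch (reusing `frames` from the extraction):

```python
# Frames whose (start, end) span duplicates an existing frame are rejected.
doc.add_frames(frames, valid_mode="span", create_id=True)
```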

To visualize the extracted frames, we use the ```viz_serve()``` method.
```python
doc.viz_serve()
```
A Flask App starts at port 5000 (default).
```
* Serving Flask app 'ie_viz.utilities'
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:5000
Press CTRL+C to quit
127.0.0.1 - - [03/Oct/2024 23:36:22] "GET / HTTP/1.1" 200 -
```

<div align="left"><img src="doc_asset/readme_img/llm-ie_demo.PNG" width=1000 ></div>

## Examples
- [Interactive chat with LLM prompt editors](demo/prompt_template_writing_via_chat.ipynb)
- [Write prompt templates with LLM prompt editors](demo/prompt_template_writing.ipynb)
- [NER + RE for Drug, Strength, Frequency](demo/medication_relation_extraction.ipynb)

## User Guide
This package comprises several key classes:
- LLM Inference Engine
- Prompt Template
- Prompt Editor
- Extractors

### LLM Inference Engine
Provides an interface for different LLM inference engines to work in the information extraction workflow. The built-in engines are `LiteLLMInferenceEngine`, `OpenAIInferenceEngine`, `AzureOpenAIInferenceEngine`, `HuggingFaceHubInferenceEngine`, `OllamaInferenceEngine`, and `LlamaCppInferenceEngine`.

#### 🚅 LiteLLM
LiteLLM is an adapter project that unifies many proprietary and open-source LLM APIs. Popular inference servers, including OpenAI, Huggingface Hub, and Ollama, are supported via its interface. For more details, refer to the [LiteLLM GitHub page](https://github.com/BerriAI/litellm).

To use LiteLLM with LLM-IE, import the `LiteLLMInferenceEngine` and follow the required model naming.
```python
import os
from llm_ie.engines import LiteLLMInferenceEngine

# Huggingface serverless inferencing (requires the HF_TOKEN environment variable)
os.environ['HF_TOKEN'] = "<your_token>"
inference_engine = LiteLLMInferenceEngine(model="huggingface/meta-llama/Meta-Llama-3-8B-Instruct")

# OpenAI GPT models (requires the OPENAI_API_KEY environment variable)
os.environ['OPENAI_API_KEY'] = "<your_API_key>"
inference_engine = LiteLLMInferenceEngine(model="openai/gpt-4o-mini")

# OpenAI compatible local server
inference_engine = LiteLLMInferenceEngine(model="openai/Llama-3.1-8B-Instruct", base_url="http://localhost:8000/v1", api_key="EMPTY")

# Ollama
inference_engine = LiteLLMInferenceEngine(model="ollama/llama3.1:8b-instruct-q8_0")
```

#### <img src=doc_asset/readme_img/openai-logomark_white.png width=16 /> OpenAI API
In bash, save the API key to the environment variable ```OPENAI_API_KEY```.
```
export OPENAI_API_KEY=<your_API_key>
```

In Python, create the inference engine and specify the model name. For the available models, refer to the [OpenAI webpage](https://platform.openai.com/docs/models).
For more parameters, see the [OpenAI API reference](https://platform.openai.com/docs/api-reference/introduction).

```python
from llm_ie.engines import OpenAIInferenceEngine

inference_engine = OpenAIInferenceEngine(model="gpt-4o-mini")
```

For reasoning models ("o" series), use the `reasoning_model=True` flag. `max_completion_tokens` is then used instead of `max_tokens`, and `temperature` is ignored.

```python
from llm_ie.engines import OpenAIInferenceEngine

inference_engine = OpenAIInferenceEngine(model="o1-mini", reasoning_model=True)
```

#### <img src=doc_asset/readme_img/Azure_icon.png width=32 /> Azure OpenAI API
In bash, save the endpoint name and API key to the environment variables `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY`.
```
export AZURE_OPENAI_API_KEY="<your_API_key>"
export AZURE_OPENAI_ENDPOINT="<your_endpoint>"
```

In Python, create the inference engine and specify the model name. For the available models, refer to the [OpenAI webpage](https://platform.openai.com/docs/models).
For more parameters, see the [Azure OpenAI reference](https://learn.microsoft.com/en-us/azure/ai-services/openai/quickstart).

```python
from llm_ie.engines import AzureOpenAIInferenceEngine

inference_engine = AzureOpenAIInferenceEngine(model="gpt-4o-mini")
```

For reasoning models ("o" series), use the `reasoning_model=True` flag. `max_completion_tokens` is then used instead of `max_tokens`, and `temperature` is ignored.

```python
from llm_ie.engines import AzureOpenAIInferenceEngine

inference_engine = AzureOpenAIInferenceEngine(model="o1-mini", reasoning_model=True)
```

#### 🤗 huggingface_hub
The ```model``` can be a model id hosted on the Hugging Face Hub or a URL to a deployed Inference Endpoint. Refer to the [Inference Client](https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_client) documentation for more details.

```python
from llm_ie.engines import HuggingFaceHubInferenceEngine

inference_engine = HuggingFaceHubInferenceEngine(model="meta-llama/Meta-Llama-3-8B-Instruct")
```

#### <img src="doc_asset/readme_img/ollama_icon.png" alt="Icon" width="22"/> Ollama
The ```model_name``` must match the names on the [Ollama library](https://ollama.com/library). Use the command line ```ollama ls``` to check your local model list. ```num_ctx``` determines the context length the LLM considers during text generation. Empirically, a longer context length gives better performance while consuming more memory and computation. ```keep_alive``` regulates the lifespan of the LLM: the number of seconds to hold the LLM in memory after the last API call. The default is 5 minutes (300 sec).

```python
from llm_ie.engines import OllamaInferenceEngine

inference_engine = OllamaInferenceEngine(model_name="llama3.1:8b-instruct-q8_0", num_ctx=4096, keep_alive=300)
```

#### <img src=doc_asset/readme_img/vllm-logo_small.png width=20 /> vLLM
The vLLM support follows the [OpenAI Compatible Server](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html). For more parameters, please refer to the documentation.

Start the server
```cmd
CUDA_VISIBLE_DEVICES=<GPU#> vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct --api-key MY_API_KEY --tensor-parallel-size <# of GPUs to use>
```
Use ```CUDA_VISIBLE_DEVICES``` to specify the GPUs to use. The ```--tensor-parallel-size``` should be set accordingly. The ```--api-key``` is optional.
The default port is 8000; use ```--port``` to set a different one.

Define the inference engine
```python
from llm_ie.engines import OpenAIInferenceEngine

inference_engine = OpenAIInferenceEngine(base_url="http://localhost:8000/v1",
                                         api_key="MY_API_KEY",
                                         model="meta-llama/Meta-Llama-3.1-8B-Instruct")
```
The ```model``` must match the repo name specified in the server.

#### 🦙 Llama-cpp-python
The ```repo_id``` and ```gguf_filename``` must match the ones on the Huggingface repo to ensure the correct model is loaded. ```n_ctx``` determines the context length the LLM considers during text generation. Empirically, a longer context length gives better performance while consuming more memory and computation. Note that when ```n_ctx``` is less than the prompt length, Llama.cpp throws an exception. ```n_gpu_layers``` indicates the number of model layers to offload to GPU; the default is -1 for all layers (the entire LLM). Flash attention (```flash_attn```) is supported by Llama.cpp. ```verbose``` indicates whether model information should be displayed. For more input parameters, see 🦙 [Llama-cpp-python](https://github.com/abetlen/llama-cpp-python).

```python
from llm_ie.engines import LlamaCppInferenceEngine

inference_engine = LlamaCppInferenceEngine(repo_id="bullerwins/Meta-Llama-3.1-8B-Instruct-GGUF",
                                           gguf_filename="Meta-Llama-3.1-8B-Instruct-Q8_0.gguf",
                                           n_ctx=4096,
                                           n_gpu_layers=-1,
                                           flash_attn=True,
                                           verbose=False)
```

#### Test inference engine configuration
To test the inference engine, use the ```chat()``` method.

```python
from llm_ie.engines import OllamaInferenceEngine

inference_engine = OllamaInferenceEngine(model_name="llama3.1:8b-instruct-q8_0")
inference_engine.chat(messages=[{"role": "user", "content": "Hi"}], stream=True)
```
The output should be something like (might vary by LLM and version)

```python
'How can I help you today?'
```

#### Customize inference engine
The abstract class ```InferenceEngine``` defines the interface and the required method ```chat()```. Inherit this class for a customized API.
```python
import abc
from typing import List, Dict

class InferenceEngine:
    @abc.abstractmethod
    def __init__(self):
        """
        This is an abstract class to provide interfaces for LLM inference engines.
        Child classes that inherit this class can be used in extractors. Must implement the chat() method.
        """
        return NotImplemented

    @abc.abstractmethod
    def chat(self, messages:List[Dict[str,str]], max_new_tokens:int=2048, temperature:float=0.0, stream:bool=False, **kwrs) -> str:
        """
        This method inputs chat messages and outputs LLM generated text.

        Parameters:
        ----------
        messages : List[Dict[str,str]]
            a list of dicts with role and content. role must be one of {"system", "user", "assistant"}
        max_new_tokens : int, Optional
            the max number of new tokens the LLM can generate.
        temperature : float, Optional
            the temperature for token sampling.
        stream : bool, Optional
            if True, LLM generated text will be printed in the terminal in real-time.
        """
        return NotImplemented
```
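
For example, a minimal sketch of a custom engine wrapping an OpenAI-compatible HTTP server. The endpoint path and payload fields are assumptions for illustration; adapt them to your server's API.

```python
import requests  # any HTTP client works; requests is assumed here
from typing import List, Dict
from llm_ie.engines import InferenceEngine

class MyHTTPInferenceEngine(InferenceEngine):
    def __init__(self, base_url: str, model: str):
        self.base_url = base_url
        self.model = model

    def chat(self, messages: List[Dict[str, str]], max_new_tokens: int = 2048,
             temperature: float = 0.0, stream: bool = False, **kwrs) -> str:
        # Post the chat messages to a (hypothetical) OpenAI-compatible endpoint
        resp = requests.post(f"{self.base_url}/chat/completions",
                             json={"model": self.model,
                                   "messages": messages,
                                   "max_tokens": max_new_tokens,
                                   "temperature": temperature},
                             timeout=300)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
```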

### Prompt Template
A prompt template is a string with one or many placeholders ```{{<placeholder_name>}}```. When input to an extractor, the ```text_content``` will be inserted into the placeholders to construct a prompt. Below is a demo:

```python
from llm_ie.engines import OllamaInferenceEngine
from llm_ie.extractors import BasicFrameExtractor

prompt_template = """
Below is a medical note. Your task is to extract diagnosis information.
Your output should include:
"Diagnosis": extract diagnosis names,
"Datetime": date/ time of diagnosis,
"Status": status of present, history, or family history

Your output should follow a JSON format:
[
{"Diagnosis": <exact words as in the document>, "Datetime": <diagnosis datetime>, "Status": <one of "present", "history">},
{"Diagnosis": <exact words as in the document>, "Datetime": <diagnosis datetime>, "Status": <one of "present", "history">},
...
]

Below is the medical note:
"{{input}}"
"""
# Define an inference engine
ollama = OllamaInferenceEngine(model_name="llama3.1:8b-instruct-q8_0")

# Define an extractor
extractor = BasicFrameExtractor(ollama, prompt_template)

# Apply text content to prompt template
prompt_text = extractor._get_user_prompt(text_content="<some text...>")
print(prompt_text)
```

The ```prompt_text``` is the prompt template with the text content filled into the placeholder:

```
Below is a medical note. Your task is to extract diagnosis information.
Your output should include:
"Diagnosis": extract diagnosis names,
"Datetime": date/ time of diagnosis,
"Status": status of present, history, or family history
Your output should follow a JSON format:
[
{"Diagnosis": <exact words as in the document>, "Datetime": <diagnosis datetime>, "Status": <one of "present", "history">},
{"Diagnosis": <exact words as in the document>, "Datetime": <diagnosis datetime>, "Status": <one of "present", "history">},
...
]
Below is the medical note:
"<some text...>"
```

#### Placeholder
When only one placeholder is defined in the prompt template, the ```text_content``` can be a string or a dictionary with one key (regardless of the key name). When multiple placeholders are defined in the prompt template, the ```text_content``` should be a dictionary with:

```python
{"<placeholder 1>": "<some text>", "<placeholder 2>": "<some text>", ...}
```
For example,

```python
prompt_template = """
Below is a medical note. Your task is to extract diagnosis information.

# Background knowledge
{{knowledge}}
Your output should include:
"Diagnosis": extract diagnosis names,
"Datetime": date/ time of diagnosis,
"Status": status of present, history, or family history

Your output should follow a JSON format:
[
{"Diagnosis": <exact words as in the document>, "Datetime": <diagnosis datetime>, "Status": <one of "present", "history">},
{"Diagnosis": <exact words as in the document>, "Datetime": <diagnosis datetime>, "Status": <one of "present", "history">},
...
]

Below is the medical note:
"{{note}}"
"""
inference_engine = OllamaInferenceEngine(model_name="llama3.1:8b-instruct-q8_0")
extractor = BasicFrameExtractor(inference_engine, prompt_template)
prompt_text = extractor._get_user_prompt(text_content={"knowledge": "<some text...>",
                                                       "note": "<some text...>"})
print(prompt_text)
```
Note that the keys in ```text_content``` must match the placeholder names defined in ```{{}}```.

#### Prompt writing guide
The quality of the prompt template can significantly impact the performance of information extraction. Also, the schema defined in a prompt template depends on the choice of extractor. When designing a prompt template schema, it is important to consider which extractor will be used.

The ```Extractor``` class provides documentation and examples for prompt template writing.

```python
from llm_ie.extractors import BasicFrameExtractor

print(BasicFrameExtractor.get_prompt_guide())
```

### Prompt Editor LLM Agent
The prompt editor is an LLM agent that helps users write prompt templates following the defined schema and guidelines of each extractor. Chat with the prompt editor:

```python
from llm_ie.prompt_editor import PromptEditor
from llm_ie.extractors import BasicFrameExtractor
from llm_ie.engines import OllamaInferenceEngine

# Define an LLM inference engine
inference_engine = OllamaInferenceEngine(model_name="llama3.1:8b-instruct-q8_0")

# Define editor
editor = PromptEditor(inference_engine, BasicFrameExtractor)

editor.chat()
```

In a terminal environment, an interactive chat session will start:
<div align="left"><img src=doc_asset/readme_img/terminal_chat.PNG width=1000 ></div>

In a Jupyter/IPython environment, an ipywidgets session will start:
<div align="left"><img src=doc_asset/readme_img/IPython_chat.PNG width=1000 ></div>

We can also use the `rewrite()` and `comment()` methods to interact with the prompt editor programmatically:
1. Start with a casual description of the task.
2. Have the prompt editor generate a prompt template as the starting point.
3. Manually revise the prompt template.
4. Have the prompt editor comment on or rewrite it.

```python
from llm_ie.prompt_editor import PromptEditor
from llm_ie.extractors import BasicFrameExtractor
from llm_ie.engines import OllamaInferenceEngine

# Define an LLM inference engine
inference_engine = OllamaInferenceEngine(model_name="llama3.1:8b-instruct-q8_0")

# Define editor
editor = PromptEditor(inference_engine, BasicFrameExtractor)

# Have the editor generate an initial prompt template
initial_version = editor.rewrite("Extract treatment events from the discharge summary.")
print(initial_version)
```
The editor generated an ```initial_version``` as below:

```
# Task description
The paragraph below contains information about treatment events in a patient's discharge summary. Please carefully review it and extract the treatment events, including any relevant details such as medications or procedures. Note that each treatment event may be nested under a specific section of the discharge summary.

# Schema definition
Your output should contain:
"TreatmentEvent" which is the name of the treatment,
If applicable, "Medication" which is the medication used for the treatment,
If applicable, "Procedure" which is the procedure performed during the treatment,
"Evidence" which is the EXACT sentence in the text where you found the TreatmentEvent from

# Output format definition
Your output should follow JSON format, for example:
[
{"TreatmentEvent": "<Treatment event name>", "Medication": "<name of medication>", "Procedure": "<name of procedure>", "Evidence": "<exact sentence from the text>"},
{"TreatmentEvent": "<Treatment event name>", "Medication": "<name of medication>", "Procedure": "<name of procedure>", "Evidence": "<exact sentence from the text>"}
]

# Additional hints
Your output should be 100% based on the provided content. DO NOT output fake information.
If there is no specific medication or procedure, just omit the corresponding key.

# Input placeholder
Below is the discharge summary:
{{input}}
```
Manually reviewing it and thinking about our needs, we found certain issues:
1. The task description is not specific enough. This is expected since the editor does not have access to the real document.
2. Depending on the project, we might not need the evidence text. Outputting it consumes more output tokens.

Therefore, we manually revised the prompt template as below:

```python
manually_revised = """
# Task description
The paragraph below is a patient's discharge summary. Please carefully review it and extract the treatment events, including any relevant details such as medications or procedures. Note that each treatment event may be nested under a specific section of the discharge summary.

# Schema definition
Your output should contain:
"TreatmentEvent" which is the name of the treatment,
If applicable, "Medication" which is the medication used for the treatment,
If applicable, "Procedure" which is the procedure performed during the treatment

# Output format definition
Your output should follow JSON format, for example:
[
{"TreatmentEvent": "<Treatment event name>", "Medication": "<name of medication>", "Procedure": "<name of procedure>"},
{"TreatmentEvent": "<Treatment event name>", "Medication": "<name of medication>", "Procedure": "<name of procedure>"}
]

# Additional hints
Your output should be 100% based on the provided content. DO NOT output fake information.
If there is no specific medication or procedure, just omit the corresponding key.

# Input placeholder
Below is the discharge summary:
{{input}}
"""
```

Now we have the editor comment on our revision.

```python
comment = editor.comment(manually_revised)

print(comment)
```
The comments from the editor are:

```
Here's an analysis of the draft prompt based on the prompt guideline:

**1. Task description**: The task description is clear and concise, explaining that the goal is to extract treatment events from a patient's discharge summary.

**2. Schema definition**: The schema definition is well-defined, specifying three keys: "TreatmentEvent", "Medication", and "Procedure". However, it would be helpful to provide more context on what constitutes a "treatment event" and how to distinguish between medication and procedure information.

**3. Output format definition**: The output format is correctly specified as JSON, with an example provided for clarity.

**4. Additional hints**: The additional hints are clear and concise, emphasizing the importance of extracting only real information from the text and omitting fake data.

**5. Input placeholder**: The input placeholder is present, but it would be helpful to provide a more detailed description of what type of discharge summary is expected (e.g., medical history, treatment plan, etc.).

Overall, the draft prompt is well-structured and easy to follow. However, providing more context and clarity on certain aspects, such as the definition of "treatment event" and the distinction between medication and procedure information, would make it even more effective.

Rating: 8/10

Recommendations:

* Provide a more detailed description of what constitutes a "treatment event".
* Clarify how to distinguish between medication and procedure information.
* Consider adding an example of a discharge summary to help illustrate the task.
```

After a few iterations of revision, we will have a high-quality prompt template for the information extraction pipeline.

### Extractor
An extractor implements a prompting method for information extraction. There are two extractor families: ```FrameExtractor``` and ```RelationExtractor```.
The ```FrameExtractor``` extracts named entities with attributes ("frames"). The ```RelationExtractor``` extracts the relations (and relation types) between frames.

#### FrameExtractor
The ```BasicFrameExtractor``` directly prompts the LLM to generate a list of dictionaries. Each dictionary is then post-processed into a frame. The ```ReviewFrameExtractor``` is based on the ```BasicFrameExtractor``` but adds a review step after the initial extraction to boost sensitivity and improve performance. The ```SentenceFrameExtractor``` gives the LLM the entire document upfront as a reference, then prompts the LLM sentence by sentence and collects per-sentence outputs. The ```SentenceReviewFrameExtractor``` combines the ```ReviewFrameExtractor``` and the ```SentenceFrameExtractor```: each sentence is extracted and then reviewed. The ```SentenceCoTFrameExtractor``` implements chain-of-thought (CoT) prompting: it first analyzes a sentence, then extracts frames based on the CoT. To learn about an extractor, use the class method ```get_prompt_guide()``` to print out its prompt guide.

Since the entity text output by an LLM might not exactly match the original text, we apply fuzzy search in post-processing to find the accurate entity span. In the `FrameExtractor.extract_frames()` method, setting the parameter `fuzzy_match=True` applies Jaccard similarity matching.

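To build intuition for the matching step, here is a minimal sketch of locating an entity via Jaccard similarity over a sliding word window. This illustrates the idea only; the package's internal implementation may differ.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def fuzzy_find(text: str, entity: str):
    """Slide a window the same word-length as `entity`; keep the best-scoring match."""
    entity_words = entity.lower().split()
    tokens = text.lower().split()
    best, best_score = None, 0.0
    for i in range(len(tokens) - len(entity_words) + 1):
        window = tokens[i:i + len(entity_words)]
        score = jaccard(set(entity_words), set(window))
        if score > best_score:
            best, best_score = " ".join(window), score
    return best, best_score

# e.g., ('type 2 diabetes', 1.0)
print(fuzzy_find("The patient has Type 2 diabetes mellitus.", "type 2 Diabetes"))
```
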
<details>
<summary>BasicFrameExtractor</summary>

The ```BasicFrameExtractor``` directly prompts the LLM to generate a list of dictionaries. Each dictionary is then post-processed into a frame. The ```text_content``` holds the input text as a string, or as a dictionary (if the prompt template has multiple input placeholders). The ```entity_key``` defines which JSON key should be used as entity text. It must be consistent with the prompt template.

```python
from llm_ie.extractors import BasicFrameExtractor

extractor = BasicFrameExtractor(inference_engine, prompt_temp)
frames = extractor.extract_frames(text_content=text, entity_key="Diagnosis", case_sensitive=False, fuzzy_match=True, stream=True)
```

Use the ```get_prompt_guide()``` method to inspect the prompt template guideline for ```BasicFrameExtractor```.

```python
from llm_ie.extractors import BasicFrameExtractor

print(BasicFrameExtractor.get_prompt_guide())
```

```
Prompt Template Design:

1. Task Description:
Provide a detailed description of the task, including the background and the type of task (e.g., named entity recognition).

2. Schema Definition:
List the key concepts that should be extracted, and provide clear definitions for each one.

3. Output Format Definition:
The output should be a JSON list, where each element is a dictionary representing a frame (an entity along with its attributes). Each dictionary must include a key that holds the entity text. This key can be named "entity_text" or anything else depending on the context. The attributes can either be flat (e.g., {"entity_text": "<entity_text>", "attr1": "<attr1>", "attr2": "<attr2>"}) or nested (e.g., {"entity_text": "<entity_text>", "attributes": {"attr1": "<attr1>", "attr2": "<attr2>"}}).

4. Optional: Hints:
Provide itemized hints for the information extractors to guide the extraction process.

5. Optional: Examples:
Include examples in the format:
Input: ...
Output: ...

6. Input Placeholder:
The template must include a placeholder in the format {{<placeholder_name>}} for the input text. The placeholder name can be customized as needed.

......
```
</details>

<details>
<summary>ReviewFrameExtractor</summary>

The ```ReviewFrameExtractor``` is based on the ```BasicFrameExtractor``` but adds a review step after the initial extraction to boost sensitivity and improve performance. The ```review_prompt``` and ```review_mode``` are required when constructing the ```ReviewFrameExtractor```.

There are two review modes:
1. **Addition mode**: add more frames while keeping the current ones. This is efficient for boosting recall.
2. **Revision mode**: regenerate frames (add new and delete existing).

Under the **Addition mode**, the ```review_prompt``` needs to instruct the LLM not to regenerate existing extractions:

*... You should ONLY add new diagnoses. DO NOT regenerate the entire answer.*

The ```review_mode``` should be set to ```review_mode="addition"```.

Under the **Revision mode**, the ```review_prompt``` needs to instruct the LLM to regenerate:

*... Regenerate your output.*

The ```review_mode``` should be set to ```review_mode="revision"```.

```python
from llm_ie.extractors import ReviewFrameExtractor

review_prompt = "Review the input and your output again. If you find some diagnosis was missed, add them to your output. Regenerate your output."

extractor = ReviewFrameExtractor(inference_engine, prompt_temp, review_prompt, review_mode="revision")
frames = extractor.extract_frames(text_content=text, entity_key="Diagnosis", stream=True)
```
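
An addition-mode counterpart (a sketch; the `review_prompt` wording follows the instruction quoted above):

```python
review_prompt = "Review the input and your output again. If you find some diagnosis was missed, add them to your output. You should ONLY add new diagnoses. DO NOT regenerate the entire answer."

extractor = ReviewFrameExtractor(inference_engine, prompt_temp, review_prompt, review_mode="addition")
frames = extractor.extract_frames(text_content=text, entity_key="Diagnosis", stream=True)
```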
</details>

<details>
<summary>SentenceFrameExtractor</summary>

The ```SentenceFrameExtractor``` instructs the LLM to extract sentence by sentence. The reason is to ensure the accuracy of frame spans. It also prevents the LLM from overlooking sections or sentences. Empirically, this extractor results in better recall than the ```BasicFrameExtractor``` in complex tasks.

For concurrent extraction (recommended), the `async/await` feature is used to speed up inferencing. The `concurrent_batch_size` sets the number of sentences processed concurrently.

```python
from llm_ie.extractors import SentenceFrameExtractor

extractor = SentenceFrameExtractor(inference_engine, prompt_temp)
frames = extractor.extract_frames(text_content=text, entity_key="Diagnosis", case_sensitive=False, fuzzy_match=True, concurrent=True, concurrent_batch_size=32)
```

The `context_sentences` sets the number of sentences before and after the sentence of interest to provide additional context. When `context_sentences=2`, the 2 sentences before and the 2 sentences after are included in the user prompt as context. When `context_sentences="all"`, the entire document is included as context. When `context_sentences=0`, no context is provided and the LLM will only extract based on the current sentence of interest.

```python
from llm_ie.extractors import SentenceFrameExtractor

extractor = SentenceFrameExtractor(inference_engine=inference_engine,
                                   prompt_template=prompt_temp,
                                   context_sentences=2)
frames = extractor.extract_frames(text_content=text, entity_key="Diagnosis", case_sensitive=False, fuzzy_match=True, stream=True)
```

For the sentence:

*The patient has a history of hypertension, hyperlipidemia, and Type 2 diabetes mellitus.*

the context is "previous sentence 2", "previous sentence 1", the sentence of interest, "following sentence 1", "following sentence 2":

*Emily Brown, MD (Cardiology), Dr. Michael Green, MD (Pulmonology)*

*#### Reason for Admission*
*John Doe, a 49-year-old male, was admitted to the hospital with complaints of chest pain, shortness of breath, and dizziness. The patient has a history of hypertension, hyperlipidemia, and Type 2 diabetes mellitus. #### History of Present Illness*
*The patient reported that the chest pain started two days prior to admission. The pain was described as a pressure-like sensation in the central chest, radiating to the left arm and jaw.*

</details>

<details>
<summary>SentenceReviewFrameExtractor</summary>

The `SentenceReviewFrameExtractor` performs sentence-level extraction and review.

```python
from llm_ie.extractors import SentenceReviewFrameExtractor

extractor = SentenceReviewFrameExtractor(inference_engine, prompt_temp, review_mode="revision")
frames = extractor.extract_frames(text_content=note_text, entity_key="Diagnosis", stream=True)
```

```
Sentence:
#### History of Present Illness
The patient reported that the chest pain started two days prior to admission.

Initial Output:
[
{"Diagnosis": "chest pain", "Date": "two days prior to admission", "Status": "reported"}
]
Review:
[
{"Diagnosis": "admission", "Date": null, "Status": null}
]
```

</details>

<details>
<summary>SentenceCoTFrameExtractor</summary>

The `SentenceCoTFrameExtractor` processes the document sentence by sentence. For each sentence, it first generates an analysis paragraph in `<Analysis>... </Analysis>` (chain-of-thought), then outputs the extraction as JSON in `<Outputs>... </Outputs>`, similar to the `SentenceFrameExtractor`.

```python
from llm_ie.extractors import SentenceCoTFrameExtractor

extractor = SentenceCoTFrameExtractor(inference_engine, CoT_prompt_temp)
frames = extractor.extract_frames(text_content=note_text, entity_key="Diagnosis", stream=True)
```

```
Sentence:
#### Discharge Medications
- Aspirin 81 mg daily
- Clopidogrel 75 mg daily
- Atorvastatin 40 mg daily
- Metoprolol 50 mg twice daily
- Lisinopril 20 mg daily
- Metformin 1000 mg twice daily

#### Discharge Instructions
John Doe was advised to follow a heart-healthy diet, engage in regular physical activity, and monitor his blood glucose levels.

CoT:
<Analysis>
The given text does not explicitly mention a diagnosis, but rather lists the discharge medications and instructions for the patient. However, we can infer that the patient has been diagnosed with conditions that require these medications, such as high blood pressure, high cholesterol, and diabetes.

</Analysis>

<Outputs>
[
{"Diagnosis": "hypertension", "Date": null, "Status": "confirmed"},
{"Diagnosis": "hyperlipidemia", "Date": null, "Status": "confirmed"},
{"Diagnosis": "Type 2 diabetes mellitus", "Date": null, "Status": "confirmed"}
]
</Outputs>
```

</details>

#### RelationExtractor
Relation extractors prompt the LLM with combinations of two frames from a document (```LLMInformationExtractionDocument```) and extract relations.
The ```BinaryRelationExtractor``` extracts binary relations (yes/no) between two frames. The ```MultiClassRelationExtractor``` extracts relations and assigns relation types ("multi-class").

An important feature of the relation extractors is that users are required to define a ```possible_relation_func``` or ```possible_relation_types_func``` function for the extractors. The reason is that there are too many possible pairs of frames (N choose 2; for example, 100 frames yield 4,950 candidate pairs). The ```possible_relation_func``` helps rule out impossible combinations and therefore reduces the LLM inferencing burden.

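Conceptually, the extractor enumerates candidate frame pairs and keeps only those the function allows, as sketched below (illustration only; `doc.frames` standing in for the document's frame list is an assumption):

```python
from itertools import combinations

# Keep only the pairs that pass the user-defined pre-filter.
candidate_pairs = [(f1, f2) for f1, f2 in combinations(doc.frames, 2)
                   if possible_relation_func(f1, f2)]
```
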
<details>
<summary>BinaryRelationExtractor</summary>

Use the ```get_prompt_guide()``` method to inspect the prompt template guideline for ```BinaryRelationExtractor```.
```python
from llm_ie.extractors import BinaryRelationExtractor

print(BinaryRelationExtractor.get_prompt_guide())
```

```
Prompt Template Design:

1. Task description:
Provide a detailed description of the task, including the background and the type of task (e.g., binary relation extraction). Mention the region of interest (ROI) text.

2. Schema definition:
List the criterion for relation (True) and for no relation (False).

3. Output format definition:
The output must be a dictionary with a key "Relation" (i.e., {"Relation": "<True or False>"}).

4. (optional) Hints:
Provide itemized hints for the information extractors to guide the extraction process.

5. (optional) Examples:
Include examples in the format:
Input: ...
Output: ...

6. Entity 1 full information:
Include a placeholder in the format {{<frame_1>}}

7. Entity 2 full information:
Include a placeholder in the format {{<frame_2>}}

8. Input placeholders:
The template must include a placeholder "{{roi_text}}" for the ROI text.


Example:

# Task description
This is a binary relation extraction task. Given a region of interest (ROI) text and two entities from a medical note, indicate the relation existence between the two entities.

# Schema definition
True: if there is a relationship between a medication name (one of the entities) and its strength or frequency (the other entity).
False: Otherwise.

# Output format definition
Your output should follow the JSON format:
{"Relation": "<True or False>"}

I am only interested in the content between []. Do not explain your answer.

# Hints
1. Your input always contains one medication entity and 1) one strength entity or 2) one frequency entity.
2. Pay attention to the medication entity and see if the strength or frequency is for it.
3. If the strength or frequency is for another medication, output False.
4. If the strength or frequency is for the same medication but at a different location (span), output False.

# Entity 1 full information:
{{frame_1}}

# Entity 2 full information:
{{frame_2}}

# Input placeholders
ROI Text with the two entities annotated with <entity_1> and <entity_2>:
"{{roi_text}}"
```

As an example, we define the ```possible_relation_func``` function:
- if the two frames are > 500 characters apart, we assume no relation (False)
- if the two frames are "Medication" and "Strength", or "Medication" and "Frequency", there could be a relation (True)

```python
def possible_relation_func(frame_1, frame_2) -> bool:
    """
    This function pre-processes two frames and outputs a bool indicating whether the two frames could be related.
    """
    # if the distance between the two frames is > 500 characters, assume no relation.
    if abs(frame_1.start - frame_2.start) > 500:
        return False

    # if the entity types are "Medication" and "Strength", there could be a relation.
    if (frame_1.attr["entity_type"] == "Medication" and frame_2.attr["entity_type"] == "Strength") or \
       (frame_2.attr["entity_type"] == "Medication" and frame_1.attr["entity_type"] == "Strength"):
        return True

    # if the entity types are "Medication" and "Frequency", there could be a relation.
    if (frame_1.attr["entity_type"] == "Medication" and frame_2.attr["entity_type"] == "Frequency") or \
       (frame_2.attr["entity_type"] == "Medication" and frame_1.attr["entity_type"] == "Frequency"):
        return True

    # Otherwise, no relation.
    return False
```

In the ```BinaryRelationExtractor``` constructor, we pass in the prompt template and the ```possible_relation_func```.

```python
from llm_ie.extractors import BinaryRelationExtractor

extractor = BinaryRelationExtractor(inference_engine, prompt_template=prompt_template, possible_relation_func=possible_relation_func)
# Extract binary relations with concurrent mode (faster)
relations = extractor.extract_relations(doc, concurrent=True)

# To print out the step-by-step process, use the `concurrent=False` and `stream=True` options
relations = extractor.extract_relations(doc, concurrent=False, stream=True)
```

</details>

<details>
<summary>MultiClassRelationExtractor</summary>

The main difference from the ```BinaryRelationExtractor``` is that the ```MultiClassRelationExtractor``` allows specifying relation types. The prompt template guideline has an additional placeholder for the possible relation types, ```{{pos_rel_types}}```.

```python
from llm_ie.extractors import MultiClassRelationExtractor

print(MultiClassRelationExtractor.get_prompt_guide())
```

```
Prompt Template Design:

1. Task description:
Provide a detailed description of the task, including the background and the type of task (e.g., binary relation extraction). Mention the region of interest (ROI) text.

2. Schema definition:
List the criterion for relation (True) and for no relation (False).

3. Output format definition:
This section must include a placeholder "{{pos_rel_types}}" for the possible relation types.
The output must be a dictionary with a key "RelationType" (i.e., {"RelationType": "<relation type or No Relation>"}).

4. (optional) Hints:
Provide itemized hints for the information extractors to guide the extraction process.

5. (optional) Examples:
Include examples in the format:
Input: ...
Output: ...

6. Entity 1 full information:
Include a placeholder in the format {{<frame_1>}}

7. Entity 2 full information:
Include a placeholder in the format {{<frame_2>}}

8. Input placeholders:
The template must include a placeholder "{{roi_text}}" for the ROI text.


Example:

# Task description
This is a multi-class relation extraction task. Given a region of interest (ROI) text and two frames from a medical note, classify the relation types between the two frames.

# Schema definition
Strength-Drug: this is a relationship between the drug strength and its name.
Dosage-Drug: this is a relationship between the drug dosage and its name.
Duration-Drug: this is a relationship between a drug duration and its name.
Frequency-Drug: this is a relationship between a drug frequency and its name.
Form-Drug: this is a relationship between a drug form and its name.
Route-Drug: this is a relationship between the route of administration for a drug and its name.
Reason-Drug: this is a relationship between the reason for which a drug was administered (e.g., symptoms, diseases, etc.) and a drug name.
ADE-Drug: this is a relationship between an adverse drug event (ADE) and a drug name.

# Output format definition
Choose one of the relation types listed below or choose "No Relation":
{{pos_rel_types}}

Your output should follow the JSON format:
{"RelationType": "<relation type or No Relation>"}

I am only interested in the content between []. Do not explain your answer.

# Hints
1. Your input always contains one medication entity and 1) one strength entity or 2) one frequency entity.
2. Pay attention to the medication entity and see if the strength or frequency is for it.
3. If the strength or frequency is for another medication, output "No Relation".
4. If the strength or frequency is for the same medication but at a different location (span), output "No Relation".

# Entity 1 full information:
{{frame_1}}

# Entity 2 full information:
{{frame_2}}

# Input placeholders
ROI Text with the two entities annotated with <entity_1> and <entity_2>:
"{{roi_text}}"
```

As an example, we define the ```possible_relation_types_func```:
- if the two frames are > 500 characters apart, we assume "No Relation" (output [])
- if the two frames are "Medication" and "Strength", the only possible relation types are "Strength-Drug" or "No Relation"
- if the two frames are "Medication" and "Frequency", the only possible relation types are "Frequency-Drug" or "No Relation"

```python
from typing import List

def possible_relation_types_func(frame_1, frame_2) -> List[str]:
    # If the two frames are > 500 characters apart, we assume "No Relation"
    if abs(frame_1.start - frame_2.start) > 500:
        return []

    # If the two frames are "Medication" and "Strength", the only possible relation types are "Strength-Drug" or "No Relation"
    if (frame_1.attr["entity_type"] == "Medication" and frame_2.attr["entity_type"] == "Strength") or \
       (frame_2.attr["entity_type"] == "Medication" and frame_1.attr["entity_type"] == "Strength"):
        return ['Strength-Drug']

    # If the two frames are "Medication" and "Frequency", the only possible relation types are "Frequency-Drug" or "No Relation"
    if (frame_1.attr["entity_type"] == "Medication" and frame_2.attr["entity_type"] == "Frequency") or \
       (frame_2.attr["entity_type"] == "Medication" and frame_1.attr["entity_type"] == "Frequency"):
        return ['Frequency-Drug']

    return []
```

```python
from llm_ie.extractors import MultiClassRelationExtractor

extractor = MultiClassRelationExtractor(inference_engine, prompt_template=re_prompt_template,
                                        possible_relation_types_func=possible_relation_types_func)

# Extract multi-class relations with concurrent mode (faster)
relations = extractor.extract_relations(doc, concurrent=True)

# To print out the step-by-step process, use the `concurrent=False` and `stream=True` options
relations = extractor.extract_relations(doc, concurrent=False, stream=True)
```

</details>

### Visualization

<div align="center"><img src="doc_asset/readme_img/visualization.PNG" width=95% ></div>

The `LLMInformationExtractionDocument` class supports named entity, entity attribute, and relation visualization. The implementation is through our plug-in package [ie-viz](https://github.com/daviden1013/ie-viz). Check the example Jupyter Notebook [NER + RE for Drug, Strength, Frequency](demo/medication_relation_extraction.ipynb) for a working demo.

```cmd
pip install ie-viz
```

The `viz_serve()` method starts a Flask App on localhost port 5000 by default.
```python
from llm_ie.data_types import LLMInformationExtractionDocument

# Define document
doc = LLMInformationExtractionDocument(doc_id="Medical note",
                                       text=note_text)
# Add extracted frames and relations to document
doc.add_frames(frames)
doc.add_relations(relations)
# Visualize the document
doc.viz_serve()
```

Alternatively, the `viz_render()` method returns a self-contained (HTML + JS + CSS) string. Save it to a file and open it with a browser.
```python
html = doc.viz_render()

with open("Medical note.html", "w") as f:
    f.write(html)
```

To customize colors for different entities, use `color_attr_key` (simple) or `color_map_func` (advanced).

The `color_attr_key` automatically assigns colors based on the specified attribute key, for example, "EntityType".
```python
doc.viz_serve(color_attr_key="EntityType")
```

The `color_map_func` allows users to define a custom entity-color mapping function. For example,
```python
def color_map_func(entity) -> str:
    if entity['attr']['<attribute key>'] == "<a certain value>":
        return "#7f7f7f"
    else:
        return "#03A9F4"

doc.viz_serve(color_map_func=color_map_func)
```

## Benchmarks
We benchmarked the frame and relation extractors on biomedical information extraction tasks. The results and experiment code are available on [this page](https://github.com/daviden1013/LLM-IE_Benchmark).

## Citation
For more information and benchmarks, please check our paper:
```bibtex
@article{hsu2024llm,
  title={LLM-IE: A Python Package for Generative Information Extraction with Large Language Models},
  author={Hsu, Enshuo and Roberts, Kirk},
  journal={arXiv preprint arXiv:2411.11779},
  year={2024}
}
```