cat-llm 0.0.33__py3-none-any.whl → 0.0.34__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.4
2
2
  Name: cat-llm
3
- Version: 0.0.33
3
+ Version: 0.0.34
4
4
  Summary: A tool for categorizing text data and images using LLMs and vision models
5
5
  Project-URL: Documentation, https://github.com/chrissoria/cat-llm#readme
6
6
  Project-URL: Issues, https://github.com/chrissoria/cat-llm/issues
@@ -39,6 +39,12 @@ Description-Content-Type: text/markdown
39
39
  - [Configuration](#configuration)
40
40
  - [Supported Models](#supported-models)
41
41
  - [API Reference](#api-reference)
42
+ - [explore_corpus()](#explore_corpus)
43
+ - [explore_common_categories()](#explore_common_categories)
44
+ - [multi_class()](#multi_class)
45
+ - [image_score()](#image_score)
46
+ - [image_features()](#image_features)
47
+ - [cerad_drawn_score()](#cerad_drawn_score)
42
48
  - [Academic Research](#academic-research)
43
49
  - [License](#license)
44
50
 
@@ -180,7 +186,7 @@ print(categories)
180
186
  Performs multi-label classification of text responses into user-defined categories, returning structured results with optional CSV export.
181
187
 
182
188
  **Methodology:**
183
- Processes each text response individually, assigning one or more categories from the provided list. Supports flexible output formatting and optional saving of results to CSV for easy integration with data analysis workflows[2].
189
+ Processes each text response individually, assigning one or more categories from the provided list. Supports flexible output formatting and optional saving of results to CSV for easy integration with data analysis workflows.
184
190
 
185
191
  **Parameters:**
186
192
  - `survey_question` (str): The survey question being analyzed
@@ -221,10 +227,173 @@ move_reasons = cat.multi_class(
221
227
  api_key="OPENAI_API_KEY")
222
228
  ```
223
229
 
230
+ ### `image_multi_class()`
231
+
232
+ Performs multi-label image classification into user-defined categories, returning structured results with optional CSV export.
233
+
234
+ **Methodology:**
235
+ Processes each image individually, assigning one or more categories from the provided list. Supports flexible output formatting and optional saving of results to CSV for easy integration with data analysis workflows.
236
+
237
+ **Parameters:**
238
+ - `image_description` (str): A description of what the model should expect to see
239
+ - `image_input` (list): List of image file paths, or a folder path to pull image files from
240
+ - `categories` (list): List of predefined categories for classification
241
+ - `api_key` (str): API key for the LLM service
242
+ - `user_model` (str, default="gpt-4o"): Specific model to use
243
+ - `creativity` (float, default=0): Temperature/randomness setting (0.0-1.0)
244
+ - `safety` (bool, default=False): Enable safety checks on responses and save results to CSV at each API call step
245
+ - `filename` (str, default="categorized_data.csv"): Filename for CSV output
246
+ - `save_directory` (str, optional): Directory path to save the CSV file
247
+ - `model_source` (str, default="OpenAI"): Model provider ("OpenAI", "Anthropic", "Perplexity", "Mistral")
248
+
249
+ **Returns:**
250
+ - `pandas.DataFrame`: DataFrame with classification results, columns formatted as specified
251
+
252
+ **Example:**
253
+
254
+ ```
255
+ import catllm as cat
256
+
257
+ user_categories = ["has a cat somewhere in it",
258
+ "looks cartoonish",
259
+ "Adrian Brody is in it"]
260
+
261
+ description = "Should be an image of a child's drawing"
262
+
263
+ image_categories = cat.image_multi_class(
264
+ image_description=description,
265
+ image_input=['desktop/image1.jpg', 'desktop/image2.jpg', 'desktop/image3.jpg'],
266
+ user_model="gpt-4o",
267
+ creativity=0,
268
+ categories=user_categories,
269
+ safety=True,
270
+ api_key="OPENAI_API_KEY")
271
+ ```
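If the returned DataFrame exposes one 0/1 indicator column per category (an assumption here; the exact column layout is not documented above), a quick tally shows how often each category was assigned:

```
# Column names are assumed to follow the supplied categories; inspect them first
print(image_categories.columns.tolist())

# Count how many images were assigned each category (assuming 0/1 indicator columns)
print(image_categories.sum(numeric_only=True))

# Keep a copy of the raw results for later review
image_categories.to_csv("image_categories_reviewed.csv", index=False)
```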
272
+
273
+ ### `image_score()`
274
+
275
+ Performs quality scoring of images against a reference description, returning structured results with optional CSV export.
276
+
277
+ **Methodology:**
278
+ Processes each image individually, assigning a quality score on a 5-point scale based on similarity to the expected description:
279
+
280
+ - **1**: No meaningful similarity (fundamentally different)
281
+ - **2**: Barely recognizable similarity (25% match)
282
+ - **3**: Partial match (50% key features)
283
+ - **4**: Strong alignment (75% features)
284
+ - **5**: Near-perfect match (90%+ similarity)
285
+
286
+ Supports flexible output formatting and optional saving of results to CSV for easy integration with data analysis workflows.
287
+
288
+ **Parameters:**
289
+ - `reference_image_description` (str): A description of what the model should expect to see
290
+ - `image_input` (list): List of image file paths or folder path containing images
291
+ - `reference_image` (str): A file path to the reference image
292
+ - `api_key` (str): API key for the LLM service
293
+ - `user_model` (str, default="gpt-4o"): Specific vision model to use
294
+ - `creativity` (float, default=0): Temperature/randomness setting (0.0-1.0)
295
+ - `safety` (bool, default=False): Enable safety checks and save results at each API call step
296
+ - `filename` (str, default="image_scores.csv"): Filename for CSV output
297
+ - `save_directory` (str, optional): Directory path to save the CSV file
298
+ - `model_source` (str, default="OpenAI"): Model provider ("OpenAI", "Anthropic", "Perplexity", "Mistral")
299
+
300
+ **Returns:**
301
+ - `pandas.DataFrame`: DataFrame with image paths, quality scores, and analysis details
302
+
303
+ **Example:**
304
+
305
+ ```
306
+ import catllm as cat
307
+
308
+ image_scores = cat.image_score(
309
+ reference_image_description='Adrien Brody sitting in a lawn chair',
310
+ image_input=['desktop/image1.jpg', 'desktop/image2.jpg', 'desktop/image3.jpg'],
311
+ user_model="gpt-4o",
312
+ creativity=0,
313
+ safety=True,
314
+ api_key="OPENAI_API_KEY")
315
+ ```
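Because the function returns one row per image, the scores can be filtered like any other DataFrame column. The column name `score` below is an assumption; check the returned columns for the name your version uses:

```
# Separate strong matches (4-5 on the similarity scale) from weak ones
print(image_scores.columns.tolist())
strong_matches = image_scores[image_scores["score"] >= 4]
print(f"{len(strong_matches)} of {len(image_scores)} images closely match the reference")
```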
316
+
317
+ ### `image_features()`
318
+
319
+ Extracts specific features and attributes from images, returning exact answers to user-defined questions (e.g., counts, colors, presence of objects).
320
+
321
+ **Methodology:**
322
+ Processes each image individually using vision models to extract precise information about specified features. Unlike scoring and multi-class functions, this returns factual data such as object counts, color identification, or presence/absence of specific elements. Supports flexible output formatting and optional CSV export for quantitative analysis workflows.
323
+
324
+ **Parameters:**
325
+ - `image_description` (str): A description of what the model should expect to see
326
+ - `image_input` (list): List of image file paths or folder path containing images
327
+ - `features_to_extract` (list): List of specific features to extract (e.g., ["number of people", "primary color", "contains text"])
328
+ - `api_key` (str): API key for the LLM service
329
+ - `user_model` (str, default="gpt-4o"): Specific vision model to use
330
+ - `creativity` (float, default=0): Temperature/randomness setting (0.0-1.0)
331
+ - `to_csv` (bool, default=False): Whether to save the output to a CSV file
332
+ - `safety` (bool, default=False): Enable safety checks and save results at each API call step
333
+ - `filename` (str, default="categorized_data.csv"): Filename for CSV output
334
+ - `save_directory` (str, optional): Directory path to save the CSV file
335
+ - `model_source` (str, default="OpenAI"): Model provider ("OpenAI", "Anthropic", "Perplexity", "Mistral")
336
+
337
+ **Returns:**
338
+ - `pandas.DataFrame`: DataFrame with image paths and extracted feature values for each specified attribute
339
+
340
+ **Example:**
341
+
342
+ ```
343
+ import catllm as cat
344
+
345
+ extracted_features = cat.image_features(
346
+ image_description='An AI generated image of Spongebob dancing with Patrick',
347
+ features_to_extract=['Spongebob is yellow', 'Both are smiling', 'Patrick is chunky'],
348
+ image_input=['desktop/image1.jpg', 'desktop/image2.jpg', 'desktop/image3.jpg'],
349
+ model_source= 'OpenAI',
350
+ user_model="gpt-4o",
351
+ creativity=0,
352
+ safety=True,
353
+ api_key="OPENAI_API_KEY")
354
+ ```
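Since `image_input` also accepts a folder path, the same extraction can be pointed at a directory instead of an explicit list. The folder name below is a placeholder:

```
# Hypothetical folder of images; each file found there is processed in turn
folder_features = cat.image_features(
    image_description='An AI generated image of Spongebob dancing with Patrick',
    features_to_extract=['Spongebob is yellow', 'Both are smiling', 'Patrick is chunky'],
    image_input='desktop/spongebob_images/',
    model_source='OpenAI',
    user_model="gpt-4o",
    creativity=0,
    api_key="OPENAI_API_KEY")
```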
355
+
356
+ ### `cerad_drawn_score()`
357
+
358
+ Automatically scores drawings of circles, diamonds, overlapping rectangles, and cubes according to the official Consortium to Establish a Registry for Alzheimer's Disease (CERAD) scoring system, returning structured results with optional CSV export. Works even with images that contain other drawings or writing.
359
+
360
+ **Methodology:**
361
+ Processes each image individually, evaluating the drawn shapes against CERAD criteria. Supports optional inclusion of reference shapes within images and can provide a reference example if requested. The function outputs standardized scores for reproducible analysis and integrates optional safety checks and CSV export for research workflows.
362
+
363
+ **Parameters:**
364
+ - `shape` (str): The type of shape to score (e.g., "circle", "diamond", "overlapping rectangles", "cube")
365
+ - `image_input` (list): List of image file paths or folder path containing images
366
+ - `api_key` (str): API key for the LLM service
367
+ - `user_model` (str, default="gpt-4o"): Specific model to use
368
+ - `creativity` (float, default=0): Temperature/randomness setting (0.0-1.0)
369
+ - `reference_in_image` (bool, default=False): Whether a reference shape is present in the image for comparison
370
+ - `provide_reference` (bool, default=False): Whether to provide a reference example image or description
371
+ - `safety` (bool, default=False): Enable safety checks and save results at each API call step
372
+ - `filename` (str, default="categorized_data.csv"): Filename for CSV output
373
+ - `model_source` (str, default="OpenAI"): Model provider ("OpenAI", "Anthropic", "Perplexity", "Mistral")
374
+
375
+ **Returns:**
376
+ - `pandas.DataFrame`: DataFrame with image paths, CERAD scores, and analysis details
377
+
378
+ **Example:**
379
+
380
+ ```
381
+ import catllm as cat
382
+
383
+ diamond_scores = cat.cerad_drawn_score(
384
+ shape="diamond",
385
+ image_input=df['diamond_pic_path'],
386
+ api_key=open_ai_key,
387
+ safety=True,
388
+ filename="diamond_gpt_score.csv",
389
+ )
390
+ ```
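The `reference_in_image` and `provide_reference` flags documented above can be combined with any shape; this sketch (not output from the package) assumes the scans include the printed reference shape and that `df['circle_pic_path']` holds the file paths:

```
# Score circle drawings where each scan also shows the printed reference circle
circle_scores = cat.cerad_drawn_score(
    shape="circle",
    image_input=df['circle_pic_path'],
    reference_in_image=True,   # the reference shape is visible in the scan itself
    provide_reference=False,
    api_key=open_ai_key,
    safety=True,
    filename="circle_gpt_score.csv",
)
```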
391
+
392
+
224
393
  ## Academic Research
225
394
 
226
395
  This package implements methodology from research on LLM performance in social science applications, including the UC Berkeley Social Networks Study. The package addresses reproducibility challenges in LLM-assisted research by providing standardized interfaces and consistent output formatting.
227
396
 
228
397
  ## License
229
398
 
230
- `cat-llm` is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.
399
+ `cat-llm` is distributed under the terms of the [GNU General Public License v3.0 or later](https://www.gnu.org/licenses/gpl-3.0.en.html) (GPL-3.0-or-later).
@@ -0,0 +1,9 @@
1
+ catllm/CERAD_functions.py,sha256=jJK6Ki-jvZAvND1zxQB1zdMfFBlYJl-zq5yCkXvcjd4,19622
2
+ catllm/__about__.py,sha256=4R_j_P8bEBhMpBRu9kdAlQ6ijuDOmpDcr-lNZJvVUtw,404
3
+ catllm/__init__.py,sha256=BpAG8nPhM3ZQRd0WqkubI_36-VCOs4eCYtGVgzz48Bs,337
4
+ catllm/image_functions.py,sha256=7nw0HDvacYUVo_VLcy-6Pi8QcmDbPtKVKPJITAK19RQ,31311
5
+ catllm/text_functions.py,sha256=K6oetWYk25PwsllWSZP4cFrz7kyxJg0plPRvpmQkCsU,16846
6
+ cat_llm-0.0.34.dist-info/METADATA,sha256=lvaM125K48B7Nf41XfgnY7CyV3miVOIuADb0hcFDo3Y,17232
7
+ cat_llm-0.0.34.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
8
+ cat_llm-0.0.34.dist-info/licenses/LICENSE,sha256=YYp5RSBpti1KKIJO0aV0GTRgQNLIsB7TEYivC3QkCOo,788
9
+ cat_llm-0.0.34.dist-info/RECORD,,
@@ -0,0 +1,17 @@
1
+ GNU License
2
+
3
+ CatLLM is a framework for categorizing text and images in a structured output.
4
+ Copyright (C) 2025 Christopher Soria
5
+
6
+ This program is free software: you can redistribute it and/or modify
7
+ it under the terms of the GNU General Public License as published by
8
+ the Free Software Foundation, either version 3 of the License, or
9
+ (at your option) any later version.
10
+
11
+ This program is distributed in the hope that it will be useful,
12
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
13
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14
+ GNU General Public License for more details.
15
+
16
+ You should have received a copy of the GNU General Public License
17
+ along with this program. If not, see <https://www.gnu.org/licenses/>.
catllm/CERAD_functions.py CHANGED
@@ -41,9 +41,10 @@ def cerad_drawn_score(
41
41
  import glob
42
42
  import base64
43
43
  from pathlib import Path
44
+ import pkg_resources
44
45
 
45
46
  shape = shape.lower()
46
-
47
+ shape = "rectangles" if shape == "overlapping rectangles" else shape
47
48
  if shape == "circle":
48
49
  categories = ["The image contains a drawing that clearly represents a circle",
49
50
  "The image does NOT contain any drawing that resembles a circle",
@@ -107,6 +108,16 @@ def cerad_drawn_score(
107
108
  cat_num = len(categories)
108
109
  category_dict = {str(i+1): "0" for i in range(cat_num)}
109
110
  example_JSON = json.dumps(category_dict, indent=4)
111
+ #pulling in the reference image if provided
112
+ if provide_reference:
113
+ reference_image = pkg_resources.resource_filename(
114
+ 'catllm',
115
+ f'images/{shape}.png' # e.g., "circle.png"
116
+ )
117
+ ext = Path(reference_image_path).suffix[1:]
118
+ with open(reference_image_path, 'rb') as f:
119
+ encoded_ref = base64.b64encode(f.read()).decode('utf-8')
120
+ encoded_ref_image = f"data:image/{ext};base64,{encoded_ref}"
110
121
 
111
122
  link1 = []
112
123
  extracted_jsons = []
@@ -146,13 +157,21 @@ def cerad_drawn_score(
146
157
  f"No additional keys, comments, or text.\n\n"
147
158
  f"Example:\n"
148
159
  f"{example_JSON}"
149
- ),
150
- },
151
- {
152
- "type": "image_url",
153
- "image_url": {"url": encoded_image, "detail": "high"},
154
- }
160
+ )
161
+ }
155
162
  ]
163
+ # Conditionally add reference image
164
+ if provide_reference:
165
+ prompt.append({
166
+ "type": "image_url",
167
+ "image_url": {"url": reference_image, "detail": "high"}
168
+ })
169
+
170
+ prompt.append({
171
+ "type": "image_url",
172
+ "image_url": {"url": encoded_image, "detail": "high"}
173
+ })
174
+ print(prompt)
156
175
  elif model_source == "Anthropic":
157
176
  prompt = [
158
177
  {
@@ -347,7 +366,7 @@ def cerad_drawn_score(
347
366
  categorized_data['score'] = categorized_data['diamond_4_sides'] + categorized_data['diamond_equal_sides'] + categorized_data['similar']
348
367
 
349
368
  categorized_data.loc[categorized_data['none'] == 1, 'score'] = 0
350
- categorized_data.loc[(categorized_data['diamond_square'] == 1) & (categorized_data['score'] == 0), 'score'] = 2
369
+ #categorized_data.loc[(categorized_data['diamond_square'] == 1) & (categorized_data['score'] == 0), 'score'] = 2
351
370
 
352
371
  elif shape == "rectangles" or shape == "overlapping rectangles":
353
372
 
catllm/__about__.py CHANGED
@@ -1,7 +1,7 @@
1
1
  # SPDX-FileCopyrightText: 2025-present Christopher Soria <chrissoria@berkeley.edu>
2
2
  #
3
3
  # SPDX-License-Identifier: MIT
4
- __version__ = "0.0.33"
4
+ __version__ = "0.0.34"
5
5
  __author__ = "Chris Soria"
6
6
  __email__ = "chrissoria@berkeley.edu"
7
7
  __title__ = "cat-llm"
catllm/image_functions.py CHANGED
@@ -4,7 +4,6 @@ def image_multi_class(
4
4
  image_input,
5
5
  categories,
6
6
  api_key,
7
- columns="numbered",
8
7
  user_model="gpt-4o",
9
8
  creativity=0,
10
9
  to_csv=False,
@@ -508,7 +507,6 @@ def image_features(
508
507
  image_input,
509
508
  features_to_extract,
510
509
  api_key,
511
- columns="numbered",
512
510
  user_model="gpt-4o-2024-11-20",
513
511
  creativity=0,
514
512
  to_csv=False,
@@ -1,9 +0,0 @@
1
- catllm/CERAD_functions.py,sha256=fiSiBnCcFgNp5XmGhZULnToEoMyP5z6JMcH-aWC8q5o,18787
2
- catllm/__about__.py,sha256=QD4n_jc9pZ_DH4rnRx892q9STG4YDuOKqS8li05uQnw,404
3
- catllm/__init__.py,sha256=BpAG8nPhM3ZQRd0WqkubI_36-VCOs4eCYtGVgzz48Bs,337
4
- catllm/image_functions.py,sha256=9e4V1IEMZUFrH00yEjyowwTUKeXWGsln0U1iQ-DELTY,31359
5
- catllm/text_functions.py,sha256=K6oetWYk25PwsllWSZP4cFrz7kyxJg0plPRvpmQkCsU,16846
6
- cat_llm-0.0.33.dist-info/METADATA,sha256=XiSskbffmKcIABIrm7vnJqJmDgZGOh6Qi_JABdU5Uls,9260
7
- cat_llm-0.0.33.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
8
- cat_llm-0.0.33.dist-info/licenses/LICENSE,sha256=wJLsvOr6lrFUDcoPXExa01HOKFWrS3JC9f0RudRw8uw,1075
9
- cat_llm-0.0.33.dist-info/RECORD,,
@@ -1,21 +0,0 @@
1
- MIT License
2
-
3
- Copyright (c) 2025 Christopher Soria
4
-
5
- Permission is hereby granted, free of charge, to any person obtaining a copy
6
- of this software and associated documentation files (the "Software"), to deal
7
- in the Software without restriction, including without limitation the rights
8
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
- copies of the Software, and to permit persons to whom the Software is
10
- furnished to do so, subject to the following conditions:
11
-
12
- The above copyright notice and this permission notice shall be included in all
13
- copies or substantial portions of the Software.
14
-
15
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
- SOFTWARE.