themefinder 0.7.0__tar.gz → 0.7.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


Files changed (20)
  1. {themefinder-0.7.0 → themefinder-0.7.1}/PKG-INFO +2 -2
  2. {themefinder-0.7.0 → themefinder-0.7.1}/pyproject.toml +2 -2
  3. themefinder-0.7.1/src/themefinder/prompts/detail_detection.txt +31 -0
  4. themefinder-0.7.0/src/themefinder/prompts/detail_detection.txt +0 -19
  5. {themefinder-0.7.0 → themefinder-0.7.1}/LICENCE +0 -0
  6. {themefinder-0.7.0 → themefinder-0.7.1}/README.md +0 -0
  7. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/__init__.py +0 -0
  8. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/core.py +0 -0
  9. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/llm_batch_processor.py +0 -0
  10. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/models.py +0 -0
  11. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/prompts/agentic_theme_clustering.txt +0 -0
  12. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/prompts/consultation_system_prompt.txt +0 -0
  13. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/prompts/sentiment_analysis.txt +0 -0
  14. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/prompts/theme_condensation.txt +0 -0
  15. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/prompts/theme_generation.txt +0 -0
  16. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/prompts/theme_mapping.txt +0 -0
  17. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/prompts/theme_refinement.txt +0 -0
  18. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/prompts/theme_target_alignment.txt +0 -0
  19. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/theme_clustering_agent.py +0 -0
  20. {themefinder-0.7.0 → themefinder-0.7.1}/src/themefinder/themefinder_logging.py +0 -0
--- themefinder-0.7.0/PKG-INFO
+++ themefinder-0.7.1/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.3
 Name: themefinder
-Version: 0.7.0
+Version: 0.7.1
 Summary: A topic modelling Python package designed for analysing one-to-many question-answer data eg free-text survey responses.
 License: MIT
 Author: i.AI
@@ -17,7 +17,7 @@ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
 Classifier: Topic :: Text Processing :: Linguistic
 Requires-Dist: boto3 (>=1.29,<2.0)
 Requires-Dist: langchain
-Requires-Dist: langchain-openai (==0.1.17)
+Requires-Dist: langchain-openai
 Requires-Dist: langfuse (==2.29.1)
 Requires-Dist: openpyxl (>=3.1.5,<4.0.0)
 Requires-Dist: pandas (>=2.2.2,<3.0.0)
--- themefinder-0.7.0/pyproject.toml
+++ themefinder-0.7.1/pyproject.toml
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "themefinder"
-version = "0.7.0"
+version = "0.7.1"
 description = "A topic modelling Python package designed for analysing one-to-many question-answer data eg free-text survey responses."
 authors = ["i.AI <packages@cabinetoffice.gov.uk>"]
 packages = [{include = "themefinder", from = "src"}]
@@ -19,7 +19,7 @@ classifiers = [
 [tool.poetry.dependencies]
 python = ">=3.10,<3.13"
 langchain = "*"
-langchain-openai = "0.1.17"
+langchain-openai = "*"
 pandas = "^2.2.2"
 python-dotenv = "^1.0.1"
 langfuse = "2.29.1"
--- /dev/null
+++ themefinder-0.7.1/src/themefinder/prompts/detail_detection.txt
@@ -0,0 +1,31 @@
+{system_prompt}
+
+You will receive a list of RESPONSES, each containing a response_id and a response.
+Your job is to analyze each response to the QUESTION below and decide if a response contains rich evidence.
+You MUST include every response ID in the output.
+
+A response is evidence-rich only if it satisfies both of the following:
+
+Relevance and depth:
+    - It clearly answers the question
+    - AND provides insights that go beyond generic opinion, such as nuanced reasoning, contextual explanation, or argumentation that could inform decision-making
+
+Substantive evidence, including at least one of:
+    - Specific, verifiable facts or data (e.g., statistics, dates, named reports or studies)
+    - Concrete, illustrative examples that clearly support a broader claim
+    - Detailed personal or professional experiences that include contextual information (e.g., roles, locations, timelines)
+
+Do NOT classify a response as evidence-rich if it:
+- Uses vague or general language with no supporting detail
+- Restates commonly known points without adding new information
+- Shares personal anecdotes without sufficient context or a clear takeaway
+
+Before answering, ask: Would this response provide useful input to someone drafting policy, beyond what is already commonly known or expected?
+
+For each response, determine:
+EVIDENCE_RICH - does the response contain significant evidence as defined above?
+Choose one from ['YES', 'NO']
+
+
+QUESTION: \n {question}
+RESPONSES: \n {responses}
@@ -1,19 +0,0 @@
1
- {system_prompt}
2
-
3
- You will receive a list of RESPONSES, each containing a response_id and a response.
4
- Your job is to analyze each response to the QUESTION below and decide if a response contains rich evidence.
5
- You MUST include every response ID in the output.
6
-
7
- Evidence-rich responses contain one or more of the following:
8
- - Specific facts or figures that shed new light on the issue (e.g., statistics, percentages, measurements, dates)
9
- - Concrete examples and specific insights that could inform decision-making
10
- - Detailed personal or professional experiences with clear contextual information or specific incidents
11
- In addition to the above an evidence rich response should answer the question and provide deeper insights than an average response.
12
-
13
- For each response, determine:
14
- EVIDENCE_RICH - does the response contain significant evidence as defined above?
15
- Choose one from ['YES', 'NO']
16
-
17
-
18
- QUESTION: \n {question}
19
- RESPONSES: \n {responses}
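The rewritten detail_detection.txt keeps the same brace placeholders as the old version ({system_prompt}, {question}, {responses}), so it reads as a Python str.format-style template. A minimal sketch of filling it, using an abridged copy of the template from the diff above — how themefinder itself loads and populates the real file is an assumption, not taken from the package source:

```python
# Abridged template with the placeholders shown in the diff above.
# (Loading/filling code is a hypothetical sketch, not themefinder's own.)
template = (
    "{system_prompt}\n\n"
    "For each response, determine:\n"
    "EVIDENCE_RICH - does the response contain significant evidence as defined above?\n"
    "Choose one from ['YES', 'NO']\n\n"
    "QUESTION: \n {question}\n"
    "RESPONSES: \n {responses}"
)

# Fill the placeholders with example values (all hypothetical).
prompt = template.format(
    system_prompt="You are an expert consultation analyst.",
    question="Should the local park stay open after dark?",
    responses='[{"response_id": 1, "response": "Yes, because ..."}]',
)
print(prompt)
```

Note that the only 0.7.0 → 0.7.1 changes are this prompt rewrite and the unpinning of langchain-openai (from ==0.1.17 to any version); no Python modules in the package changed.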