olca 0.2.62__tar.gz → 0.2.64__tar.gz

@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: olca
- Version: 0.2.62
+ Version: 0.2.64
  Summary: A Python package for experimental usage of Langchain and Human-in-the-Loop
  Home-page: https://github.com/jgwill/olca
  Author: Jean GUillaume ISabelle
@@ -375,7 +375,6 @@ oLCa is a Python package that provides a CLI tool for Experimenting Langchain wi

  ## Features

-
  ## Installation

  To install the package, you can use pip:
@@ -384,11 +383,31 @@ To install the package, you can use pip:
  pip install olca
  ```

+ ## Quick Start
+
+ 1. Install the package:
+ ```bash
+ pip install olca
+ ```
+ 2. Initialize configuration:
+ ```bash
+ olca init
+ ```
+ 3. Run the CLI with tracing:
+ ```bash
+ olca -T
+ ```
+
+ ## Environment Variables
+
+ Set LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST for tracing with Langfuse.
+ Set LANGCHAIN_API_KEY for LangSmith tracing.
+ Optionally, set OPENAI_API_KEY for OpenAI usage.
+
  ## Usage

  ### CLI Tool

-
  #### Help

  To see the available commands and options, use the `--help` flag:
@@ -397,8 +416,6 @@ To see the available commands and options, use the `--help` flag:
  olca2 --help
  ```

-
-
  ## fusewill

  The `fusewill` command is a CLI tool that provides functionalities for interacting with Langfuse, including tracing, dataset management, and prompt operations.
@@ -407,12 +424,11 @@ The `fusewill` command is a CLI tool that provides functionalities for interacti

  To see the available commands and options for `fusewill`, use the `--help` flag:

-
  ----
+
  IMPORTED README from olca1
  ----

-
  ### Olca

  The olca.py script is designed to function as a command-line interface (CLI) agent. It performs various tasks based on given inputs and files present in the directory. The agent is capable of creating directories, producing reports, and writing instructions for self-learning. It operates within a GitHub repository environment and can commit and push changes if provided with an issue ID. The script ensures that it logs its internal actions and follows specific guidelines for handling tasks and reporting, without modifying certain configuration files or checking out branches unless explicitly instructed.
@@ -4,7 +4,6 @@ oLCa is a Python package that provides a CLI tool for Experimenting Langchain wi

  ## Features

-
  ## Installation

  To install the package, you can use pip:
@@ -13,11 +12,31 @@ To install the package, you can use pip:
  pip install olca
  ```

+ ## Quick Start
+
+ 1. Install the package:
+ ```bash
+ pip install olca
+ ```
+ 2. Initialize configuration:
+ ```bash
+ olca init
+ ```
+ 3. Run the CLI with tracing:
+ ```bash
+ olca -T
+ ```
+
+ ## Environment Variables
+
+ Set LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST for tracing with Langfuse.
+ Set LANGCHAIN_API_KEY for LangSmith tracing.
+ Optionally, set OPENAI_API_KEY for OpenAI usage.
+
  ## Usage

  ### CLI Tool

-
  #### Help

  To see the available commands and options, use the `--help` flag:
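The environment variables listed in the new README section can be checked before launching the CLI. A minimal sketch of such a pre-flight check (not part of olca; the helper name is invented for illustration):

```python
import os

# Variables the README above says Langfuse tracing needs.
REQUIRED_LANGFUSE_VARS = ("LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY", "LANGFUSE_HOST")

def missing_tracing_vars(env=None):
    """Return the names of required Langfuse variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_LANGFUSE_VARS if not env.get(name)]
```

Calling `missing_tracing_vars()` before `olca -T` makes a missing-key failure explicit instead of surfacing as a tracing error later.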
@@ -26,8 +45,6 @@ To see the available commands and options, use the `--help` flag:
  olca2 --help
  ```

-
-
  ## fusewill

  The `fusewill` command is a CLI tool that provides functionalities for interacting with Langfuse, including tracing, dataset management, and prompt operations.
@@ -36,12 +53,11 @@ The `fusewill` command is a CLI tool that provides functionalities for interacti

  To see the available commands and options for `fusewill`, use the `--help` flag:

-
  ----
+
  IMPORTED README from olca1
  ----

-
  ### Olca

  The olca.py script is designed to function as a command-line interface (CLI) agent. It performs various tasks based on given inputs and files present in the directory. The agent is capable of creating directories, producing reports, and writing instructions for self-learning. It operates within a GitHub repository environment and can commit and push changes if provided with an issue ID. The script ensures that it logs its internal actions and follows specific guidelines for handling tasks and reporting, without modifying certain configuration files or checking out branches unless explicitly instructed.
@@ -123,7 +123,7 @@ def list_traces_by_score(score_name, min_value=None, max_value=None, limit=100):
      return filtered_traces

  def add_score_to_a_trace(trace_id, generation_id, name, value, data_type="NUMERIC", comment=""):
-     langfuse.score(
+     result_add_score_to_a_trace=langfuse.score(
          trace_id=trace_id,
          observation_id=generation_id,
          name=name,
@@ -131,11 +131,16 @@ def add_score_to_a_trace(trace_id, generation_id, name, value, data_type="NUMERI
          data_type=data_type,
          comment=comment
      )
+     return result_add_score_to_a_trace

  def create_score(name, data_type, description="", possible_values=None, min_value=None, max_value=None):
-     langfuse.score(
+     placeholder_value = ""
+     if data_type.upper() == "BOOLEAN":
+         placeholder_value = "1"
+
+     resulting_score = langfuse.score(
          name=name,
-         value="", # Provide a placeholder value
+         value=placeholder_value,
          data_type=data_type,
          description=description,
          # For categorical:
@@ -143,11 +148,17 @@ def create_score(name, data_type, description="", possible_values=None, min_valu
          # For numeric:
          **({"min_value": min_value, "max_value": max_value} if data_type == "NUMERIC" and min_value is not None and max_value is not None else {})
      )
+     return resulting_score

  def score_exists(name):
-     scores = langfuse.get_scores()
-     for score in scores.data:
-         if score.name == name:
+     """
+     Check if a score with the given name exists by calling list_scores().
+     """
+     scores = list_scores()
+     if not scores or scores.get('meta', {}).get('totalItems', 0) == 0:
+         return False
+     for sc in scores:
+         if sc.get("name") == name:
              return True
      return False

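The reworked `score_exists` walks whatever `list_scores()` returns. The same name lookup can be sketched stand-alone, written defensively for either a dict payload carrying a `data` list or a bare list of score dicts (payload shapes assumed for illustration, not taken from the Langfuse API):

```python
def score_name_exists(payload, name):
    """Return True if a score dict named `name` appears in the payload.

    Accepts either {"data": [...]} or a plain list of score dicts.
    """
    if payload is None:
        return False
    items = payload.get("data", []) if isinstance(payload, dict) else payload
    return any(item.get("name") == name for item in items)
```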
@@ -265,7 +276,7 @@ def fetch_all_traces(start_date=None, end_date=None):

  def export_traces(format='json', output_path=None, start_date=None, end_date=None):
      """
-     Export traces to a given format (json or csv).
+     Export traces along with their full score details.
      """
      try:
          all_traces = fetch_all_traces(start_date=start_date, end_date=end_date)
@@ -277,9 +288,23 @@ def export_traces(format='json', output_path=None, start_date=None, end_date=Non
          if output_dir and not os.path.exists(output_dir):
              os.makedirs(output_dir)

+         all_scores=list_scores()
+         exported_data = []
+         for t in all_traces:
+             # fetch full score details
+             score_details = []
+             if t.scores:
+                 for s_id in t.scores:
+                     s_detail = get_score_by_id(s_id)
+                     if s_detail:
+                         score_details.append(s_detail)
+             t_dict = t.__dict__
+             t_dict["score_details"] = score_details
+             exported_data.append(t_dict)
+
          if format == 'json':
              with open(output_path, 'w') as f:
-                 json.dump([t.__dict__ for t in all_traces], f, indent=2, default=str)
+                 json.dump(exported_data, f, indent=2, default=str)
          elif format == 'csv':
              import csv
              fieldnames = ['id', 'name', 'input', 'output', 'createdAt']
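The new export loop resolves each trace's score IDs into full score records before serializing. The same enrichment in isolation, with plain dicts and a lookup table standing in for trace objects and `get_score_by_id` (a sketch, not the package's code):

```python
def attach_score_details(traces, score_lookup):
    """Copy each trace dict and add a 'score_details' list resolved from
    its 'scores' IDs; unknown IDs are skipped, as in export_traces."""
    enriched = []
    for trace in traces:
        record = dict(trace)
        record["score_details"] = [
            score_lookup[sid] for sid in trace.get("scores", []) if sid in score_lookup
        ]
        enriched.append(record)
    return enriched
```

Copying each trace before mutation avoids the side effect the shipped code has of writing `score_details` back into `t.__dict__`.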
@@ -299,7 +324,7 @@ def export_traces(format='json', output_path=None, start_date=None, end_date=Non
              # Sort traces by createdAt to ensure the oldest date is first
              all_traces.sort(key=lambda x: x.createdAt)
              first_trace_date = datetime.datetime.fromisoformat(all_traces[0].createdAt.replace('Z', '+00:00')).strftime('%Y-%m-%d %H:%M:%S')
-             last_trace_date = datetime.datetime.fromisoformat(all_traces[-1].CreatedAt.replace('Z', '+00:00')).strftime('%Y-%m-%d %H:%M:%S')
+             last_trace_date = datetime.datetime.fromisoformat(all_traces[-1].createdAt.replace('Z', '+00:00')).strftime('%Y-%m-%d %H:%M:%S')
              print(f"Traces exported to {output_path}. Total traces exported: {len(all_traces)}")
              print(f"Date range: {first_trace_date} to {last_trace_date}")
          else:
@@ -307,9 +332,29 @@ def export_traces(format='json', output_path=None, start_date=None, end_date=Non
      except Exception as e:
          print(f"Error exporting traces: {e}")

+ def create_new_trace(name, input_text, output_text, session_id=None, metadata=None, timestamp=None):
+     """
+     Creates a new trace with an optional timestamp.
+     """
+     parsed_timestamp = None
+     if timestamp:
+         try:
+             parsed_timestamp = datetime.datetime.fromisoformat(timestamp.replace('Z', '+00:00'))
+         except ValueError:
+             pass
+     trace_created=langfuse.trace(
+         name=name,
+         input=input_text,
+         output=output_text,
+         session_id=session_id,
+         metadata=metadata,
+         timestamp=parsed_timestamp
+     )
+     return trace_created
+
  def import_traces(format='json', input_path=None):
      """
-     Import traces from a given file (json or csv) into Langfuse.
+     Import traces. If any score doesn't exist, create it and attach it to the trace.
      """
      if not input_path:
          print("No input file provided for importing traces.")
@@ -327,14 +372,51 @@ def import_traces(format='json', input_path=None):
              for row in reader:
                  data.append(row)

+         if isinstance(data, dict):
+             data = [data]
+
          # Create new traces in Langfuse from data
          for item in data:
-             langfuse.create_trace(
+             trace_timestamp = item.get('timestamp') or item.get('createdAt')
+             new_trace = create_new_trace(
                  name=item.get('name', 'Imported Trace'),
-                 input=item.get('input', ''),
-                 output=item.get('output', '')
-                 # pass other fields as needed
+                 input_text=item.get('input', ''),
+                 output_text=item.get('output', ''),
+                 session_id=item.get('session_id'),
+                 metadata=item.get('metadata'),
+                 timestamp=trace_timestamp
              )
+             # handle imported scores
+             for s_detail in item.get("score_details", []):
+                 score_name = s_detail["name"]
+                 score_value = str(s_detail.get("value", "0"))
+                 score_data_type = s_detail.get("dataType", "NUMERIC")
+                 score_comment = s_detail.get("comment", "")
+                 score_description = s_detail.get("description", "")
+                 score_possible_values = s_detail.get("possible_values")
+                 minimum_score_value = s_detail.get("min_value")
+                 max_score_value = s_detail.get("max_value")
+                 if not score_exists(score_name):
+                     resulting_score=create_score(
+                         name=score_name,
+                         data_type=score_data_type,
+                         description=score_description,
+                         possible_values=score_possible_values,
+                         min_value=minimum_score_value,
+                         max_value=max_score_value
+                     )
+                 result_add_score_to_a_trace=add_score_to_a_trace(
+                     trace_id=new_trace.id,
+                     generation_id=None, # Replace as needed if your data includes observation IDs
+                     name=score_name,
+                     value=score_value,
+                     data_type=score_data_type,
+                     comment=score_comment
+                 )
+                 print(f"Added score {score_name} to trace {new_trace.id}")
+                 print(result_add_score_to_a_trace)
+
          print(f"Imported {len(data)} traces from {input_path}")
      except Exception as e:
          print(f"Error importing traces: {e}")
@@ -0,0 +1,145 @@
+ #!/bin/env python
+ #@STCGoal Create an agent callable with command line
+ #@STCIssue The original script was not callable from the command line.
+
+ from langchain import hub
+ from langchain.agents import AgentExecutor, create_react_agent
+ from langchain_community.agent_toolkits.load_tools import load_tools
+ from langchain_openai import ChatOpenAI
+ import argparse
+ import os
+ import json
+ import tlid
+ import warnings
+
+ warnings.filterwarnings("ignore", message="The function `loads` is in beta. It is actively being worked on, so the API may change.")
+ DEBUG_MODE=False
+
+ prompt=None
+ def _create_agent_tools(tool_name = "arxiv",temperature = 0.0,tool_hub_tag = "jgwill/react",chatbot_model = "gpt-4o-mini"):
+     global prompt
+
+     llm = ChatOpenAI(temperature=temperature,name=chatbot_model)
+
+     tools = load_tools(
+         [tool_name],
+     )
+     if DEBUG_MODE:
+         print("Tools:")
+         print(tools)
+         print("--------------")
+     prompt = hub.pull(tool_hub_tag)
+
+     if DEBUG_MODE:
+         print("prompt:")
+         print(prompt)
+         print("--------------")
+
+     agent = create_react_agent(llm, tools, prompt)
+
+     if DEBUG_MODE:
+         print("Agent:")
+         print(agent)
+         print("--------------")
+     return tools,agent
+
+
+ def create_agent_executor(tools, agent) -> AgentExecutor:
+     return AgentExecutor(agent=agent, tools=tools, verbose=True,handle_parsing_errors=True)
+
+
+ # tools, agent=_create_agent_tools()
+ # agent_executor=create_agent_executor(tools, agent)
+ def ask_agent(input_request, agent_executor=None,tool_hub_tag = "jgwill/react",chatbot_model = "gpt-4o-mini"):
+     if agent_executor is None:
+         tools, agent = _create_agent_tools(tool_hub_tag=tool_hub_tag)
+         agent_executor = create_agent_executor(tools, agent)
+
+     resp = agent_executor.invoke(
+         {
+             "input": input_request,
+         }
+     )
+
+     return resp
+
+ def serialize_response_to_json(resp):
+     return json.dumps(resp)
+
+ def serialize_response_to_json_file(resp, filename):
+     json_str=serialize_response_to_json(resp)
+     with open(filename, 'w') as f:
+         f.write(json_str)
+
+ def serialize_response_to_markdown(o):
+     output=o["output"]["output"]
+     string_md="# Output\n"
+     #string_md+=f"## Model\n{o['model']}\n"
+     #string_md+=f"## Prompt\n{o['prompt']['prompt']}\n"
+     string_md+=f"## Input\n{o['input']}\n"
+     string_md+=f"## Output\n{output}\n"
+     return string_md
+
+ def serialize_response_to_markdown_file(o, filename):
+     string_md=serialize_response_to_markdown(o)
+     with open(filename, 'w') as f:
+         f.write(string_md)
+
+
+ def main():
+     global prompt
+     args = parse_cli_arguments()
+
+     input_request = args.input
+     tool_hub_tag = "jgwill/react" if args.hub_tag is None else args.hub_tag
+     resp = ask_agent(input_request,tool_hub_tag=tool_hub_tag,chatbot_model=args.chatbot_model)
+     outdir=os.path.join(os.getcwd(),"output")
+     os.makedirs(outdir, exist_ok=True)
+     out_filename = f"{args.prefix}output-{tlid.get_minutes()}.json"
+     outfile=os.path.join(outdir,out_filename)
+     o={}
+     prompt_dict = {
+         "prompt": str(prompt)
+     }
+     o["model"]=args.chatbot_model
+     o["prompt"]=prompt_dict
+     o["input"]=input_request
+     o["output"]=resp
+     serialize_response_to_json_file(o, outfile)
+     serialize_response_to_markdown_file(o, outfile.replace(".json",".md"))
+     VERBOSE_RESULT=False
+     if VERBOSE_RESULT:
+         print("==================================")
+         print(input_request)
+         print("============RESPONSE============")
+         print(resp)
+         print("==================================")
+         print("=============INPUT Request=====================")
+         print(input_request)
+         print("==================================")
+         print("============OUTPUT============")
+         output=resp["output"]
+         print(output)
+
+ def parse_cli_arguments():
+     parser = argparse.ArgumentParser(description='Process Input request for pattern search.')
+     parser.add_argument('-I','--input', type=str,
+                         help='an input request for the searched article')
+     ##--hub_tag
+     parser.add_argument('-H','-ht','--hub_tag', type=str,
+                         help='The hub tag for the process',default="jgwill/react")
+     #--chatbot_model
+     parser.add_argument('-M','-m','--chatbot_model', type=str,
+                         help='a chatbot model for the processing',default="gpt-4o-mini")
+     #--prefix
+     parser.add_argument('-P','-p','--prefix', type=str,
+                         help='a file prefix for output',default="arxiv-")
+     args = parser.parse_args()
+     return args
+
+ if __name__ == "__main__":
+     main()
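In the new `oiv.py`, `serialize_response_to_markdown` expects the record `main()` assembles, where `record["output"]` is the agent response and itself carries an `output` key. A self-contained rendering of that same shape (sample data invented for illustration):

```python
def render_markdown(record):
    """Render the '# Output / ## Input / ## Output' layout that
    serialize_response_to_markdown in oiv.py produces."""
    return (
        "# Output\n"
        f"## Input\n{record['input']}\n"
        f"## Output\n{record['output']['output']}\n"
    )

sample = {"input": "find papers on X", "output": {"output": "Here are two papers."}}
```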
@@ -8,7 +8,7 @@ import argparse
  import yaml
  from olca.utils import load_environment, initialize_langfuse
  from olca.tracing import TracingManager
- from olca.olcahelper import setup_required_directories, initialize_config_file
+ from olca.olcahelper import setup_required_directories, initialize_config_file, prepare_input
  from prompts import SYSTEM_PROMPT_APPEND, HUMAN_APPEND_PROMPT

  #jgwill/olca1
@@ -128,18 +128,6 @@ def print_stream(stream):
      except Exception as e:
          print(s)

- def prepare_input(user_input, system_instructions,append_prompt=True, human=False):
-     appended_prompt = system_instructions + SYSTEM_PROMPT_APPEND if append_prompt else system_instructions
-     appended_prompt = appended_prompt + HUMAN_APPEND_PROMPT if human else appended_prompt
-
-     inputs = {"messages": [
-         ("system",
-         appended_prompt),
-         ("user", user_input )
-     ]}
-
-     return inputs,system_instructions,user_input
-
  OLCA_DESCRIPTION = "OlCA (Orpheus Langchain CLI Assistant) (very Experimental and dangerous)"
  OLCA_EPILOG = "For more information: https://github.com/jgwill/orpheuspypractice/wiki/olca"
  OLCA_USAGE="olca [-D] [-H] [-M] [-T] [init] [-y]"
@@ -65,3 +65,13 @@ def initialize_config_file():
      except KeyboardInterrupt:
          print("\nConfiguration canceled by user.")
          exit(0)
+
+ def prepare_input(user_input, system_instructions, append_prompt=True, human=False):
+     from olca.prompts import SYSTEM_PROMPT_APPEND, HUMAN_APPEND_PROMPT
+     appended_prompt = system_instructions + SYSTEM_PROMPT_APPEND if append_prompt else system_instructions
+     appended_prompt = appended_prompt + HUMAN_APPEND_PROMPT if human else appended_prompt
+     inputs = {"messages": [
+         ("system", appended_prompt),
+         ("user", user_input)
+     ]}
+     return inputs, system_instructions, user_input
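The relocated `prepare_input` builds a two-message chat payload. The same shaping, sketched with the append strings passed in explicitly so it runs without the `olca.prompts` module (helper name invented here):

```python
def build_inputs(user_input, system_instructions, system_append="", human_append="",
                 append_prompt=True, human=False):
    """Mirror prepare_input's message shaping: optionally append the
    system and human suffixes, then emit a (system, user) message pair."""
    system = system_instructions + (system_append if append_prompt else "")
    if human:
        system += human_append
    return {"messages": [("system", system), ("user", user_input)]}
```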
@@ -5,6 +5,7 @@ setup.py
  olca/__init__.py
  olca/fusewill_cli.py
  olca/fusewill_utils.py
+ olca/oiv.py
  olca/olcacli.py
  olca/olcahelper.py
  olca/prompts.py
@@ -1,4 +1,5 @@
  [console_scripts]
  fusewill = olca.fusewill_cli:main
+ oiv = olca.oiv:main
  olca = olca.olcacli:main
  olca2 = olca.olcacli:main
@@ -7,7 +7,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "olca"
- version = "0.2.62"
+ version = "0.2.64"

  description = "A Python package for experimental usage of Langchain and Human-in-the-Loop"
  readme = "README.md"
@@ -45,3 +45,5 @@ classifiers = [
  olca2 = "olca.olcacli:main"
  olca = "olca.olcacli:main"
  fusewill = "olca.fusewill_cli:main"
+ oiv = "olca.oiv:main"
+
@@ -2,7 +2,7 @@ from setuptools import setup, find_packages

  setup(
      name='olca',
-     version = "0.2.62",
+     version = "0.2.64",
      author='Jean GUillaume ISabelle',
      author_email='jgi@jgwill.com',
      description='A Python package for experimenting with Langchain agent and interactivity in Terminal modalities.',
@@ -31,7 +31,8 @@ setup(
      'langchain-ollama',
      'langgraph',
      'llm',
-     'langgraph'
+     'langgraph',
+     'arxiv',
      ],
      entry_points={
          'console_scripts': [