weco 0.1.7__tar.gz → 0.1.8__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
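The headline change in 0.1.8 is a new `return_reasoning` flag threaded through `query`, `aquery`, and `batch_query`, with the response dict gaining an optional `reasoning_steps` list. As a sketch of consuming the response shape the notebook diffs below print — using a stubbed response dict, since no Weco API call is made here and all values are invented:

```python
# Stubbed response dict mirroring the shape weco.query(..., return_reasoning=True)
# returns in 0.1.8. No real API call is made; values are illustrative only.
query_response = {
    "output": {"sentiment": "positive", "explanation": "The text expresses enthusiasm."},
    "reasoning_steps": ["Identify sentiment-bearing words.", "Weigh positive against negative cues."],
    "in_tokens": 42,
    "out_tokens": 17,
    "latency_ms": 312.5,
}

# The same printing pattern the updated cookbook cells use:
for key, value in query_response["output"].items():
    print(f"{key}: {value}")
for i, step in enumerate(query_response.get("reasoning_steps", [])):
    print(f"Step {i + 1}: {step}")
print(f"Input Tokens: {query_response['in_tokens']}")
print(f"Output Tokens: {query_response['out_tokens']}")
print(f"Latency: {query_response['latency_ms']} ms")
```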
@@ -74,15 +74,12 @@ jobs:
  inputs: >-
  ./dist/*.tar.gz
  ./dist/*.whl
- - name: Debug Print github.ref_name
- run: >-
- echo "github.ref_name: ${{ github.ref_name }}"
  - name: Create GitHub Release
  env:
  GITHUB_TOKEN: ${{ github.token }}
  run: >-
  gh release create
- 'v0.1.7'
+ 'v0.1.8'
  --repo '${{ github.repository }}'
  --notes ""
  - name: Upload artifact signatures to GitHub Release
@@ -93,5 +90,5 @@ jobs:
  # sigstore-produced signatures and certificates.
  run: >-
  gh release upload
- 'v0.1.7' dist/**
+ 'v0.1.8' dist/**
  --repo '${{ github.repository }}'
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: weco
- Version: 0.1.7
+ Version: 0.1.8
  Summary: A client facing API for interacting with the WeCo AI function builder service.
  Author-email: WeCo AI Team <dhruv@weco.ai>
  License: MIT
@@ -52,12 +52,16 @@ pip install weco
  ```

  ## Features
+ - Synchronous & Asynchronous client.
+ - Batch API
+ - Multimodality (Language & Vision)
+ - Interpretability (view the reasoning behind outputs)
+
+
+ ## What We Offer

  - The **build** function enables quick and easy prototyping of new functions via LLMs through just natural language. We encourage users to do this through our [web console](https://weco-app.vercel.app/function) for maximum control and ease of use, however, you can also do this through our API as shown in [here](examples/cookbook.ipynb).
  - The **query** function allows you to test and use the newly created function in your own code.
- - We offer asynchronous versions of the above clients.
- - We provide a **batch_query** functions that allows users to batch functions for various inputs as well as multiple inputs for the same function in a query. This is helpful to make a large number of queries more efficiently.
- - We also offer multimodality capabilities. You can now query our client with both **language** AND **vision** inputs!

  We provide both services in two ways:
  - `weco.WecoAI` client to be used when you want to maintain the same client service across a portion of code. This is better for dense service usage.
@@ -25,12 +25,16 @@ pip install weco
  ```

  ## Features
+ - Synchronous & Asynchronous client.
+ - Batch API
+ - Multimodality (Language & Vision)
+ - Interpretability (view the reasoning behind outputs)
+
+
+ ## What We Offer

  - The **build** function enables quick and easy prototyping of new functions via LLMs through just natural language. We encourage users to do this through our [web console](https://weco-app.vercel.app/function) for maximum control and ease of use, however, you can also do this through our API as shown in [here](examples/cookbook.ipynb).
  - The **query** function allows you to test and use the newly created function in your own code.
- - We offer asynchronous versions of the above clients.
- - We provide a **batch_query** functions that allows users to batch functions for various inputs as well as multiple inputs for the same function in a query. This is helpful to make a large number of queries more efficiently.
- - We also offer multimodality capabilities. You can now query our client with both **language** AND **vision** inputs!

  We provide both services in two ways:
  - `weco.WecoAI` client to be used when you want to maintain the same client service across a portion of code. This is better for dense service usage.
@@ -144,7 +144,7 @@
  "with open(\"/path/to/home_exterior.jpeg\", \"rb\") as img_file:\n",
  " my_home_exterior = base64.b64encode(img_file.read()).decode('utf-8')\n",
  "\n",
- "response = query(\n",
+ "query_response = query(\n",
  " fn_name=fn_name,\n",
  " text_input=request,\n",
  " images_input=[\n",
@@ -154,7 +154,10 @@
  " ]\n",
  ")\n",
  "\n",
- "print(response)"
+ "for key, value in query_response[\"output\"].items(): print(f\"{key}: {value}\")\n",
+ "print(f\"Input Tokens: {query_response['in_tokens']}\")\n",
+ "print(f\"Output Tokens: {query_response['out_tokens']}\")\n",
+ "print(f\"Latency: {query_response['latency_ms']} ms\")"
  ]
  },
  {
@@ -214,7 +217,10 @@
  " fn_name=fn_name,\n",
  " text_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\"\n",
  ")\n",
- "for key, value in query_response.items(): print(f\"{key}: {value}\")"
+ "for key, value in query_response[\"output\"].items(): print(f\"{key}: {value}\")\n",
+ "print(f\"Input Tokens: {query_response['in_tokens']}\")\n",
+ "print(f\"Output Tokens: {query_response['out_tokens']}\")\n",
+ "print(f\"Latency: {query_response['latency_ms']} ms\")"
  ]
  },
  {
@@ -274,7 +280,12 @@
  "query_responses = batch_query(\n",
  " fn_names=fn_name,\n",
  " batch_inputs=[input_1, input_2]\n",
- ")"
+ ")\n",
+ "for i, query_response in enumerate(query_responses):\n",
+ " print(\"-\"*50)\n",
+ " print(f\"For input {i + 1}\")\n",
+ " for key, value in query_response[\"output\"].items(): print(f\"{key}: {value}\")\n",
+ " print(\"-\"*50)"
  ]
  },
  {
@@ -323,14 +334,49 @@
  " fn_name=fn_name,\n",
  " text_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\"\n",
  ")\n",
- "for key, value in query_response.items(): print(f\"{key}: {value}\")"
+ "for key, value in query_response[\"output\"].items(): print(f\"{key}: {value}\")\n",
+ "print(f\"Input Tokens: {query_response['in_tokens']}\")\n",
+ "print(f\"Output Tokens: {query_response['out_tokens']}\")\n",
+ "print(f\"Latency: {query_response['latency_ms']} ms\")"
  ]
  },
  {
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "## A/B Testing with Function Versions"
+ "## Interpretability"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can now understand why a model generated an output simply by passing `return_reasoning=True` at query time!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from weco import build, query\n",
+ "\n",
+ "# Describe the task you want the function to perform\n",
+ "fn_name, fn_desc = build(task_description=task_description)\n",
+ "print(f\"AI Function {fn_name} built. This does the following - \\n{fn_desc}.\")\n",
+ "\n",
+ "# Query the function with a specific input\n",
+ "query_response = query(\n",
+ " fn_name=fn_name,\n",
+ " text_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\",\n",
+ " return_reasoning=True\n",
+ ")\n",
+ "for key, value in query_response[\"output\"].items(): print(f\"{key}: {value}\")\n",
+ "for i, step in enumerate(query_response[\"reasoning_steps\"]): print(f\"Step {i+1}: {step}\")\n",
+ "print(f\"Input Tokens: {query_response['in_tokens']}\")\n",
+ "print(f\"Output Tokens: {query_response['out_tokens']}\")\n",
+ "print(f\"Latency: {query_response['latency_ms']} ms\")"
  ]
  },
  {
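The batch cell in the hunk above iterates a list of response dicts. The same iteration pattern as a standalone sketch, with stubbed responses in place of a real `batch_query` call (all field values are invented):

```python
# Stubbed batch responses mirroring the list weco.batch_query returns in 0.1.8.
# No API call is made; the output keys follow the evaluator described in the diff.
query_responses = [
    {"output": {"feasibility": "high", "justification": "Tabular regression is well studied.", "suggestions": "Start with gradient boosting."}},
    {"output": {"feasibility": "high", "justification": "MNIST is a standard benchmark.", "suggestions": "A small CNN suffices."}},
]

# Same iteration pattern as the updated notebook cell:
for i, query_response in enumerate(query_responses):
    print("-" * 50)
    print(f"For input {i + 1}")
    for key, value in query_response["output"].items():
        print(f"{key}: {value}")
    print("-" * 50)
```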
@@ -10,7 +10,7 @@ authors = [
  ]
  description = "A client facing API for interacting with the WeCo AI function builder service."
  readme = "README.md"
- version = "0.1.7"
+ version = "0.1.8"
  license = {text = "MIT"}
  requires-python = ">=3.8"
  dependencies = ["asyncio", "httpx[http2]", "pillow"]
@@ -20,13 +20,14 @@ async def assert_query_response(query_response):
  assert isinstance(query_response["in_tokens"], int)
  assert isinstance(query_response["out_tokens"], int)
  assert isinstance(query_response["latency_ms"], float)
+ assert "reasoning_steps" not in query_response


  @pytest.fixture
  async def text_evaluator():
  fn_name, version_number, fn_desc = await abuild(
  task_description="Evaluate the sentiment of the given text. Provide a json object with 'sentiment' and 'explanation' keys.",
- multimodal=False
+ multimodal=False,
  )
  return fn_name, version_number, fn_desc

@@ -44,7 +45,7 @@ async def test_text_aquery(text_evaluator):
  async def image_evaluator():
  fn_name, version_number, fn_desc = await abuild(
  task_description="Describe the contents of the given images. Provide a json object with 'description' and 'objects' keys.",
- multimodal=True
+ multimodal=True,
  )
  return fn_name, version_number, fn_desc

@@ -69,7 +70,7 @@ async def test_image_aquery(image_evaluator):
  async def text_and_image_evaluator():
  fn_name, version_number, fn_desc = await abuild(
  task_description="Evaluate, solve and arrive at a numerical answer for the image provided. Provide a json object with 'answer' and 'explanation' keys.",
- multimodal=True
+ multimodal=True,
  )
  return fn_name, version_number, fn_desc

@@ -9,7 +9,7 @@ from weco import batch_query, build
  def ml_task_evaluator():
  fn_name, version_number, _ = build(
  task_description="I want to evaluate the feasibility of a machine learning task. Give me a json object with three keys - 'feasibility', 'justification', and 'suggestions'.",
- multimodal=False
+ multimodal=False,
  )
  return fn_name, version_number

@@ -18,7 +18,9 @@ def ml_task_evaluator():
  def ml_task_inputs():
  return [
  {"text_input": "I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle."},
- {"text_input": "I want to train a model to classify digits using the MNIST dataset hosted on Kaggle using a Google Colab notebook."},
+ {
+ "text_input": "I want to train a model to classify digits using the MNIST dataset hosted on Kaggle using a Google Colab notebook."
+ },
  ]


@@ -26,7 +28,7 @@ def ml_task_inputs():
  def image_evaluator():
  fn_name, version_number, _ = build(
  task_description="Describe the contents of the given images. Provide a json object with 'description' and 'objects' keys.",
- multimodal=True
+ multimodal=True,
  )
  return fn_name, version_number

@@ -34,7 +36,11 @@ def image_evaluator():
  @pytest.fixture
  def image_inputs():
  return [
- {"images_input": ["https://www.integratedtreatmentservices.co.uk/wp-content/uploads/2013/12/Objects-of-Reference.jpg"]},
+ {
+ "images_input": [
+ "https://www.integratedtreatmentservices.co.uk/wp-content/uploads/2013/12/Objects-of-Reference.jpg"
+ ]
+ },
  {"images_input": ["https://t4.ftcdn.net/jpg/05/70/90/23/360_F_570902339_kNj1reH40GFXakTy98EmfiZHci2xvUCS.jpg"]},
  ]

@@ -70,6 +76,7 @@ def test_batch_query_image(image_evaluator, image_inputs):
  assert isinstance(query_response["in_tokens"], int)
  assert isinstance(query_response["out_tokens"], int)
  assert isinstance(query_response["latency_ms"], float)
+ assert "reasoning_steps" not in query_response

  output = query_response["output"]
- assert set(output.keys()) == {"description", "objects"}
+ assert set(output.keys()) == {"description", "objects"}
@@ -0,0 +1,54 @@
+ import pytest
+
+ from weco import build, query
+
+
+ def assert_query_response(query_response):
+ assert isinstance(query_response, dict)
+ assert isinstance(query_response["output"], dict)
+ assert isinstance(query_response["reasoning_steps"], list)
+ for step in query_response["reasoning_steps"]: assert isinstance(step, str)
+ assert isinstance(query_response["in_tokens"], int)
+ assert isinstance(query_response["out_tokens"], int)
+ assert isinstance(query_response["latency_ms"], float)
+
+
+ @pytest.fixture
+ def text_reasoning_evaluator():
+ fn_name, version_number, fn_desc = build(
+ task_description="Evaluate the sentiment of the given text. Provide a json object with 'sentiment' and 'explanation' keys.",
+ multimodal=False,
+ )
+ return fn_name, version_number, fn_desc
+
+
+ def test_text_reasoning_query(text_reasoning_evaluator):
+ fn_name, version_number, _ = text_reasoning_evaluator
+ query_response = query(fn_name=fn_name, version_number=version_number, text_input="I love this product!", return_reasoning=True)
+
+ assert_query_response(query_response)
+ assert set(query_response["output"].keys()) == {"sentiment", "explanation"}
+
+ @pytest.fixture
+ def vision_reasoning_evaluator():
+ fn_name, version_number, fn_desc = build(
+ task_description="Evaluate, solve and arrive at a numerical answer for the image provided. Perform any additional things if instructed. Provide a json object with 'answer' and 'explanation' keys.",
+ multimodal=True,
+ )
+ return fn_name, version_number, fn_desc
+
+
+ def test_vision_reasoning_query(vision_reasoning_evaluator):
+ fn_name, version_number, _ = vision_reasoning_evaluator
+ query_response = query(
+ fn_name=fn_name,
+ version_number=version_number,
+ text_input="Find x and y.",
+ images_input=[
+ "https://i.ytimg.com/vi/cblHUeq3bkE/hq720.jpg?sqp=-oaymwEhCK4FEIIDSFryq4qpAxMIARUAAAAAGAElAADIQj0AgKJD&rs=AOn4CLAKn3piY91QRCBzRgnzAPf7MPrjDQ"
+ ],
+ return_reasoning=True,
+ )
+
+ assert_query_response(query_response)
+ assert set(query_response["output"].keys()) == {"answer", "explanation"}
@@ -19,13 +19,14 @@ def assert_query_response(query_response):
  assert isinstance(query_response["in_tokens"], int)
  assert isinstance(query_response["out_tokens"], int)
  assert isinstance(query_response["latency_ms"], float)
+ assert "reasoning_steps" not in query_response


  @pytest.fixture
  def text_evaluator():
  fn_name, version_number, fn_desc = build(
  task_description="Evaluate the sentiment of the given text. Provide a json object with 'sentiment' and 'explanation' keys.",
- multimodal=False
+ multimodal=False,
  )
  return fn_name, version_number, fn_desc

@@ -42,7 +43,7 @@ def test_text_query(text_evaluator):
  def image_evaluator():
  fn_name, version_number, fn_desc = build(
  task_description="Describe the contents of the given images. Provide a json object with 'description' and 'objects' keys.",
- multimodal=True
+ multimodal=True,
  )
  return fn_name, version_number, fn_desc

@@ -66,7 +67,7 @@ def test_image_query(image_evaluator):
  def text_and_image_evaluator():
  fn_name, version_number, fn_desc = build(
  task_description="Evaluate, solve and arrive at a numerical answer for the image provided. Perform any additional things if instructed. Provide a json object with 'answer' and 'explanation' keys.",
- multimodal=True
+ multimodal=True,
  )
  return fn_name, version_number, fn_desc

@@ -31,7 +31,7 @@ class WecoAI:
  ----------
  api_key : str
  The API key used for authentication.
-
+
  timeout : float
  The timeout for the HTTP requests in seconds. Default is 120.0.

@@ -39,7 +39,7 @@ class WecoAI:
  Whether to use HTTP/2 protocol for the HTTP requests. Default is True.
  """

- def __init__(self, api_key: str = None, timeout: float = 120.0, http2: bool = True) -> None:
+ def __init__(self, api_key: Union[str, None] = None, timeout: float = 120.0, http2: bool = True) -> None:
  """Initializes the WecoAI client with the provided API key and base URL.

  Parameters
@@ -153,24 +153,29 @@ class WecoAI:
  for _warning in response.get("warnings", []):
  warnings.warn(_warning)

- return {
+ returned_response = {
  "output": response["response"],
  "in_tokens": response["num_input_tokens"],
  "out_tokens": response["num_output_tokens"],
  "latency_ms": response["latency_ms"],
  }
+ if "reasoning_steps" in response:
+ returned_response["reasoning_steps"] = response["reasoning_steps"]
+ return returned_response

- def _build(self, task_description: str, multimodal: bool, is_async: bool) -> Union[Tuple[str, int, str], Coroutine[Any, Any, Tuple[str, int, str]]]:
+ def _build(
+ self, task_description: str, multimodal: bool, is_async: bool
+ ) -> Union[Tuple[str, int, str], Coroutine[Any, Any, Tuple[str, int, str]]]:
  """Internal method to handle both synchronous and asynchronous build requests.

  Parameters
  ----------
  task_description : str
  A description of the task for which the function is being built.
-
+
  multimodal : bool
  Whether the function is multimodal or not.
-
+
  is_async : bool
  Whether to perform an asynchronous request.

@@ -212,7 +217,7 @@ class WecoAI:
  ----------
  task_description : str
  A description of the task for which the function is being built.
-
+
  multimodal : bool, optional
  Whether the function is multimodal or not (default is False).

@@ -230,7 +235,7 @@ class WecoAI:
  ----------
  task_description : str
  A description of the task for which the function is being built.
-
+
  multimodal : bool, optional
  Whether the function is multimodal or not (default is False).

@@ -385,7 +390,13 @@ class WecoAI:
  return image_info

  def _query(
- self, is_async: bool, fn_name: str, version_number: Optional[int], text_input: Optional[str], images_input: Optional[List[str]]
+ self,
+ is_async: bool,
+ fn_name: str,
+ version_number: Optional[int],
+ text_input: Optional[str],
+ images_input: Optional[List[str]],
+ return_reasoning: Optional[bool]
  ) -> Union[Dict[str, Any], Coroutine[Any, Any, Dict[str, Any]]]:
  """Internal method to handle both synchronous and asynchronous query requests.

@@ -401,6 +412,8 @@ class WecoAI:
  The text input to the function.
  images_input : List[str], optional
  A list of image URLs or images encoded in base64 with their metadata to be sent as input to the function.
+ return_reasoning : bool, optional
+ Whether to return reasoning for the output.

  Returns
  -------
@@ -427,7 +440,7 @@ class WecoAI:

  # Make the request
  endpoint = "query"
- data = {"name": fn_name, "text": text_input, "images": image_urls, "version_number": version_number}
+ data = {"name": fn_name, "text": text_input, "images": image_urls, "version_number": version_number, "return_reasoning": return_reasoning}
  request = self._make_request(endpoint=endpoint, data=data, is_async=is_async)

  if is_async:
@@ -442,7 +455,12 @@ class WecoAI:
  return self._process_query_response(response=response)

  async def aquery(
- self, fn_name: str, version_number: Optional[int] = -1, text_input: Optional[str] = "", images_input: Optional[List[str]] = []
+ self,
+ fn_name: str,
+ version_number: Optional[int] = -1,
+ text_input: Optional[str] = "",
+ images_input: Optional[List[str]] = [],
+ return_reasoning: Optional[bool] = False
  ) -> Dict[str, Any]:
  """Asynchronously queries a function with the given function ID and input.

@@ -456,6 +474,8 @@ class WecoAI:
  The text input to the function.
  images_input : List[str], optional
  A list of image URLs or images encoded in base64 with their metadata to be sent as input to the function.
+ return_reasoning : bool, optional
+ Whether to return reasoning for the output. Default is False.

  Returns
  -------
@@ -463,9 +483,18 @@ class WecoAI:
  A dictionary containing the output of the function, the number of input tokens, the number of output tokens,
  and the latency in milliseconds.
  """
- return await self._query(fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input, is_async=True)
-
- def query(self, fn_name: str, version_number: Optional[int] = -1, text_input: Optional[str] = "", images_input: Optional[List[str]] = []) -> Dict[str, Any]:
+ return await self._query(
+ fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input, return_reasoning=return_reasoning, is_async=True
+ )
+
+ def query(
+ self,
+ fn_name: str,
+ version_number: Optional[int] = -1,
+ text_input: Optional[str] = "",
+ images_input: Optional[List[str]] = [],
+ return_reasoning: Optional[bool] = False
+ ) -> Dict[str, Any]:
  """Synchronously queries a function with the given function ID and input.

  Parameters
@@ -478,6 +507,8 @@ class WecoAI:
  The text input to the function.
  images_input : List[str], optional
  A list of image URLs or images encoded in base64 with their metadata to be sent as input to the function.
+ return_reasoning : bool, optional
+ Whether to return reasoning for the output. Default is False.

  Returns
  -------
@@ -485,23 +516,27 @@ class WecoAI:
  A dictionary containing the output of the function, the number of input tokens, the number of output tokens,
  and the latency in milliseconds.
  """
- return self._query(fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input, is_async=False)
+ return self._query(
+ fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input, return_reasoning=return_reasoning, is_async=False
+ )

- def batch_query(self, fn_name: str, batch_inputs: List[Dict[str, Any]], version_number: Optional[int] = -1) -> List[Dict[str, Any]]:
+ def batch_query(
+ self, fn_name: str, batch_inputs: List[Dict[str, Any]], version_number: Optional[int] = -1, return_reasoning: Optional[bool] = False
+ ) -> List[Dict[str, Any]]:
  """Batch queries a function version with a list of inputs.

  Parameters
  ----------
  fn_name : str
  The name of the function or a list of function names to query.
-
  batch_inputs : List[Dict[str, Any]]
  A list of inputs for the functions to query. The input must be a dictionary containing the data to be processed. e.g.,
  when providing for a text input, the dictionary should be {"text_input": "input text"}, for an image input, the dictionary should be {"images_input": ["url1", "url2", ...]}
  and for a combination of text and image inputs, the dictionary should be {"text_input": "input text", "images_input": ["url1", "url2", ...]}.
-
  version_number : int, optional
  The version number of the function to query. If not provided, the latest version will be used. Pass -1 to use the latest version.
+ return_reasoning : bool, optional
+ Whether to return reasoning for the output. Default is False.

  Returns
  -------
@@ -509,11 +544,11 @@ class WecoAI:
  A list of dictionaries, each containing the output of a function query,
  in the same order as the input queries.
  """
+
  async def run_queries():
- tasks = list(map(
- lambda fn_input: self.aquery(fn_name=fn_name, version_number=version_number, **fn_input),
- batch_inputs
- ))
+ tasks = list(
+ map(lambda fn_input: self.aquery(fn_name=fn_name, version_number=version_number, return_reasoning=return_reasoning, **fn_input), batch_inputs)
+ )
  return await asyncio.gather(*tasks)

  return asyncio.run(run_queries())
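The client hunks above show that 0.1.8 only attaches `reasoning_steps` to the returned dict when the server response includes it. A self-contained sketch of that packaging step (field names taken from the diff; sample values invented, no client involved):

```python
def process_query_response(response: dict) -> dict:
    """Mirror of the 0.1.8 response packaging: reasoning_steps is optional."""
    returned_response = {
        "output": response["response"],
        "in_tokens": response["num_input_tokens"],
        "out_tokens": response["num_output_tokens"],
        "latency_ms": response["latency_ms"],
    }
    # Only present when the query was made with return_reasoning=True.
    if "reasoning_steps" in response:
        returned_response["reasoning_steps"] = response["reasoning_steps"]
    return returned_response

with_reasoning = process_query_response({
    "response": {"answer": 7},
    "num_input_tokens": 10,
    "num_output_tokens": 5,
    "latency_ms": 123.4,
    "reasoning_steps": ["step 1", "step 2"],
})
without_reasoning = process_query_response({
    "response": {"answer": 7},
    "num_input_tokens": 10,
    "num_output_tokens": 5,
    "latency_ms": 123.4,
})
print("reasoning_steps" in with_reasoning)     # True
print("reasoning_steps" in without_reasoning)  # False
```

This shape is what lets the pre-existing tests assert `"reasoning_steps" not in query_response` while the new `tests/test_reasoning.py` asserts it is a list of strings.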
@@ -48,7 +48,12 @@ async def abuild(task_description: str, multimodal: bool = False, api_key: str =


  def query(
- fn_name: str, version_number: Optional[int] = -1, text_input: Optional[str] = "", images_input: Optional[List[str]] = [], api_key: Optional[str] = None
+ fn_name: str,
+ version_number: Optional[int] = -1,
+ text_input: Optional[str] = "",
+ images_input: Optional[List[str]] = [],
+ return_reasoning: Optional[bool] = False,
+ api_key: Optional[str] = None,
  ) -> Dict[str, Any]:
  """Queries a function synchronously with the given function ID and input.

@@ -62,6 +67,8 @@ def query(
  The text input to the function.
  images_input : List[str], optional
  A list of image URLs or base64 encoded images to be used as input to the function.
+ return_reasoning : bool, optional
+ A flag to indicate if the reasoning should be returned. Default is False.
  api_key : str
  The API key for the WecoAI service. If not provided, the API key must be set using the environment variable - WECO_API_KEY.

@@ -72,12 +79,17 @@ def query(
  and the latency in milliseconds.
  """
  client = WecoAI(api_key=api_key)
- response = client.query(fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input)
+ response = client.query(fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input, return_reasoning=return_reasoning)
  return response


  async def aquery(
- fn_name: str, version_number: Optional[int] = -1, text_input: Optional[str] = "", images_input: Optional[List[str]] = [], api_key: Optional[str] = None
+ fn_name: str,
+ version_number: Optional[int] = -1,
+ text_input: Optional[str] = "",
+ images_input: Optional[List[str]] = [],
+ return_reasoning: Optional[bool] = False,
+ api_key: Optional[str] = None,
  ) -> Dict[str, Any]:
  """Queries a function asynchronously with the given function ID and input.

@@ -91,6 +103,8 @@ async def aquery(
  The text input to the function.
  images_input : List[str], optional
  A list of image URLs to be used as input to the function.
+ return_reasoning : bool, optional
+ A flag to indicate if the reasoning should be returned. Default is False.
  api_key : str
  The API key for the WecoAI service. If not provided, the API key must be set using the environment variable - WECO_API_KEY.

@@ -101,12 +115,14 @@ async def aquery(
  and the latency in milliseconds.
  """
  client = WecoAI(api_key=api_key)
- response = await client.aquery(fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input)
+ response = await client.aquery(
+ fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input, return_reasoning=return_reasoning
+ )
  return response


  def batch_query(
- fn_name: str, batch_inputs: List[Dict[str, Any]], version_number: Optional[int] = -1, api_key: Optional[str] = None
+ fn_name: str, batch_inputs: List[Dict[str, Any]], version_number: Optional[int] = -1, return_reasoning: Optional[bool] = False, api_key: Optional[str] = None
  ) -> List[Dict[str, Any]]:
  """Synchronously queries multiple functions using asynchronous calls internally.

@@ -119,15 +135,14 @@ def batch_query(
  The name of the function or a list of function names to query.
  Note that if a single function name is provided, it will be used for all queries.
  If a list of function names is provided, the length must match the number of queries.
-
  batch_inputs : List[str]
  A list of inputs for the functions to query. The input must be a dictionary containing the data to be processed. e.g.,
  when providing for a text input, the dictionary should be {"text_input": "input text"}, for an image input, the dictionary should be {"images_input": ["url1", "url2", ...]}
  and for a combination of text and image inputs, the dictionary should be {"text_input": "input text", "images_input": ["url1", "url2", ...]}.
-
  version_number : int, optional
  The version number of the function to query. If not provided, the latest version is used. Default is -1 for the same behavior.
-
+ return_reasoning : bool, optional
+ A flag to indicate if the reasoning should be returned. Default is False.
  api_key : str, optional
  The API key for the WecoAI service. If not provided, the API key must be set using the environment variable - WECO_API_KEY.

@@ -138,5 +153,5 @@ def batch_query(
  in the same order as the input queries.
  """
  client = WecoAI(api_key=api_key)
- responses = client.batch_query(fn_name=fn_name, version_number=version_number, batch_inputs=batch_inputs)
+ responses = client.batch_query(fn_name=fn_name, version_number=version_number, batch_inputs=batch_inputs, return_reasoning=return_reasoning)
  return responses
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: weco
- Version: 0.1.7
+ Version: 0.1.8
  Summary: A client facing API for interacting with the WeCo AI function builder service.
  Author-email: WeCo AI Team <dhruv@weco.ai>
  License: MIT
@@ -52,12 +52,16 @@ pip install weco
  ```

  ## Features
+ - Synchronous & Asynchronous client.
+ - Batch API
+ - Multimodality (Language & Vision)
+ - Interpretability (view the reasoning behind outputs)
+
+
+ ## What We Offer

  - The **build** function enables quick and easy prototyping of new functions via LLMs through just natural language. We encourage users to do this through our [web console](https://weco-app.vercel.app/function) for maximum control and ease of use, however, you can also do this through our API as shown in [here](examples/cookbook.ipynb).
  - The **query** function allows you to test and use the newly created function in your own code.
- - We offer asynchronous versions of the above clients.
- - We provide a **batch_query** functions that allows users to batch functions for various inputs as well as multiple inputs for the same function in a query. This is helpful to make a large number of queries more efficiently.
- - We also offer multimodality capabilities. You can now query our client with both **language** AND **vision** inputs!

  We provide both services in two ways:
  - `weco.WecoAI` client to be used when you want to maintain the same client service across a portion of code. This is better for dense service usage.
@@ -8,6 +8,7 @@ assets/weco.svg
  examples/cookbook.ipynb
  tests/test_asynchronous.py
  tests/test_batching.py
+ tests/test_reasoning.py
  tests/test_synchronous.py
  weco/__init__.py
  weco/client.py
File without changes