weco 0.1.7__tar.gz → 0.1.9__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -74,15 +74,12 @@ jobs:
  inputs: >-
  ./dist/*.tar.gz
  ./dist/*.whl
- - name: Debug Print github.ref_name
- run: >-
- echo "github.ref_name: ${{ github.ref_name }}"
  - name: Create GitHub Release
  env:
  GITHUB_TOKEN: ${{ github.token }}
  run: >-
  gh release create
- 'v0.1.7'
+ 'v0.1.9'
  --repo '${{ github.repository }}'
  --notes ""
  - name: Upload artifact signatures to GitHub Release
@@ -93,5 +90,5 @@ jobs:
  # sigstore-produced signatures and certificates.
  run: >-
  gh release upload
- 'v0.1.7' dist/**
+ 'v0.1.9' dist/**
  --repo '${{ github.repository }}'
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: weco
- Version: 0.1.7
+ Version: 0.1.9
  Summary: A client facing API for interacting with the WeCo AI function builder service.
  Author-email: WeCo AI Team <dhruv@weco.ai>
  License: MIT
@@ -52,12 +52,16 @@ pip install weco
  ```

  ## Features
+ - Synchronous & Asynchronous client.
+ - Batch API
+ - Multimodality (Language & Vision)
+ - Interpretability (view the reasoning behind outputs)
+
+
+ ## What We Offer

  - The **build** function enables quick and easy prototyping of new functions via LLMs through just natural language. We encourage users to do this through our [web console](https://weco-app.vercel.app/function) for maximum control and ease of use, however, you can also do this through our API as shown in [here](examples/cookbook.ipynb).
  - The **query** function allows you to test and use the newly created function in your own code.
- - We offer asynchronous versions of the above clients.
- - We provide a **batch_query** functions that allows users to batch functions for various inputs as well as multiple inputs for the same function in a query. This is helpful to make a large number of queries more efficiently.
- - We also offer multimodality capabilities. You can now query our client with both **language** AND **vision** inputs!

  We provide both services in two ways:
  - `weco.WecoAI` client to be used when you want to maintain the same client service across a portion of code. This is better for dense service usage.
@@ -25,12 +25,16 @@ pip install weco
  ```

  ## Features
+ - Synchronous & Asynchronous client.
+ - Batch API
+ - Multimodality (Language & Vision)
+ - Interpretability (view the reasoning behind outputs)
+
+
+ ## What We Offer

  - The **build** function enables quick and easy prototyping of new functions via LLMs through just natural language. We encourage users to do this through our [web console](https://weco-app.vercel.app/function) for maximum control and ease of use, however, you can also do this through our API as shown in [here](examples/cookbook.ipynb).
  - The **query** function allows you to test and use the newly created function in your own code.
- - We offer asynchronous versions of the above clients.
- - We provide a **batch_query** functions that allows users to batch functions for various inputs as well as multiple inputs for the same function in a query. This is helpful to make a large number of queries more efficiently.
- - We also offer multimodality capabilities. You can now query our client with both **language** AND **vision** inputs!

  We provide both services in two ways:
  - `weco.WecoAI` client to be used when you want to maintain the same client service across a portion of code. This is better for dense service usage.
@@ -144,7 +144,7 @@
  "with open(\"/path/to/home_exterior.jpeg\", \"rb\") as img_file:\n",
  " my_home_exterior = base64.b64encode(img_file.read()).decode('utf-8')\n",
  "\n",
- "response = query(\n",
+ "query_response = query(\n",
  " fn_name=fn_name,\n",
  " text_input=request,\n",
  " images_input=[\n",
@@ -154,7 +154,10 @@
  " ]\n",
  ")\n",
  "\n",
- "print(response)"
+ "for key, value in query_response[\"output\"].items(): print(f\"{key}: {value}\")\n",
+ "print(f\"Input Tokens: {query_response['in_tokens']}\")\n",
+ "print(f\"Output Tokens: {query_response['out_tokens']}\")\n",
+ "print(f\"Latency: {query_response['latency_ms']} ms\")"
  ]
  },
  {
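The notebook cells updated above assume the query response is a dict with an `output` dict plus `in_tokens`, `out_tokens`, and `latency_ms` fields. A minimal standalone sketch of that printing pattern; the response values below are invented placeholders, not real service output:

```python
# Sketch only: mirrors the printing pattern used in the updated notebook cells.
# The response shape (output / in_tokens / out_tokens / latency_ms) comes from the
# diff above; the concrete values are made-up examples.
query_response = {
    "output": {"price_estimate": "high", "confidence": 0.82},  # hypothetical outputs
    "in_tokens": 512,
    "out_tokens": 64,
    "latency_ms": 1234.5,
}

for key, value in query_response["output"].items():
    print(f"{key}: {value}")
print(f"Input Tokens: {query_response['in_tokens']}")
print(f"Output Tokens: {query_response['out_tokens']}")
print(f"Latency: {query_response['latency_ms']} ms")
```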
@@ -214,7 +217,10 @@
  " fn_name=fn_name,\n",
  " text_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\"\n",
  ")\n",
- "for key, value in query_response.items(): print(f\"{key}: {value}\")"
+ "for key, value in query_response[\"output\"].items(): print(f\"{key}: {value}\")\n",
+ "print(f\"Input Tokens: {query_response['in_tokens']}\")\n",
+ "print(f\"Output Tokens: {query_response['out_tokens']}\")\n",
+ "print(f\"Latency: {query_response['latency_ms']} ms\")"
  ]
  },
  {
@@ -274,7 +280,12 @@
  "query_responses = batch_query(\n",
  " fn_names=fn_name,\n",
  " batch_inputs=[input_1, input_2]\n",
- ")"
+ ")\n",
+ "for i, query_response in enumerate(query_responses):\n",
+ " print(\"-\"*50)\n",
+ " print(f\"For input {i + 1}\")\n",
+ " for key, value in query_response[\"output\"].items(): print(f\"{key}: {value}\")\n",
+ " print(\"-\"*50)"
  ]
  },
  {
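The updated batch cell iterates over the list returned by `batch_query`: one response dict per input, in input order. A standalone sketch of that loop with invented placeholder responses:

```python
# Sketch of iterating batch results; the two dicts below are made-up stand-ins for
# what batch_query returns (one response per input, in the same order).
query_responses = [
    {"output": {"feasibility": "high", "justification": "well-known dataset", "suggestions": "try gradient boosting"}},
    {"output": {"feasibility": "high", "justification": "standard benchmark", "suggestions": "try a small CNN"}},
]

for i, query_response in enumerate(query_responses):
    print("-" * 50)
    print(f"For input {i + 1}")
    for key, value in query_response["output"].items():
        print(f"{key}: {value}")
    print("-" * 50)
```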
@@ -323,14 +334,49 @@
  " fn_name=fn_name,\n",
  " text_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\"\n",
  ")\n",
- "for key, value in query_response.items(): print(f\"{key}: {value}\")"
+ "for key, value in query_response[\"output\"].items(): print(f\"{key}: {value}\")\n",
+ "print(f\"Input Tokens: {query_response['in_tokens']}\")\n",
+ "print(f\"Output Tokens: {query_response['out_tokens']}\")\n",
+ "print(f\"Latency: {query_response['latency_ms']} ms\")"
  ]
  },
  {
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "## A/B Testing with Function Versions"
+ "## Interpretability"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can now understand why a model generated an output simply by passing `return_reasoning=True` at query time!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from weco import build, query\n",
+ "\n",
+ "# Describe the task you want the function to perform\n",
+ "fn_name, fn_desc = build(task_description=task_description)\n",
+ "print(f\"AI Function {fn_name} built. This does the following - \\n{fn_desc}.\")\n",
+ "\n",
+ "# Query the function with a specific input\n",
+ "query_response = query(\n",
+ " fn_name=fn_name,\n",
+ " text_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\",\n",
+ " return_reasoning=True\n",
+ ")\n",
+ "for key, value in query_response[\"output\"].items(): print(f\"{key}: {value}\")\n",
+ "for i, step in enumerate(query_response[\"reasoning_steps\"]): print(f\"Step {i+1}: {step}\")\n",
+ "print(f\"Input Tokens: {query_response['in_tokens']}\")\n",
+ "print(f\"Output Tokens: {query_response['out_tokens']}\")\n",
+ "print(f\"Latency: {query_response['latency_ms']} ms\")"
  ]
  },
  {
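The new Interpretability cells pass `return_reasoning=True` and then read `query_response["reasoning_steps"]`. A short sketch of consuming that field on the caller side, assuming only what the notebook and the client changes below show; the sample values are placeholders, and the `.get(...)` fallback to an empty list is a defensive choice, not something the diff guarantees:

```python
# Sketch: print reasoning steps when present. "reasoning_steps" is only returned
# when return_reasoning=True (see the weco/client.py changes below), so .get()
# keeps the loop safe either way. Values here are invented for illustration.
query_response = {
    "output": {"answer": 42, "explanation": "placeholder"},
    "reasoning_steps": [
        "Read the problem statement.",
        "Set up the equations.",
        "Solve for the unknowns.",
    ],
}

for i, step in enumerate(query_response.get("reasoning_steps", [])):
    print(f"Step {i + 1}: {step}")
```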
@@ -10,7 +10,7 @@ authors = [
  ]
  description = "A client facing API for interacting with the WeCo AI function builder service."
  readme = "README.md"
- version = "0.1.7"
+ version = "0.1.9"
  license = {text = "MIT"}
  requires-python = ">=3.8"
  dependencies = ["asyncio", "httpx[http2]", "pillow"]
@@ -20,13 +20,14 @@ async def assert_query_response(query_response):
  assert isinstance(query_response["in_tokens"], int)
  assert isinstance(query_response["out_tokens"], int)
  assert isinstance(query_response["latency_ms"], float)
+ assert "reasoning_steps" not in query_response


  @pytest.fixture
  async def text_evaluator():
  fn_name, version_number, fn_desc = await abuild(
  task_description="Evaluate the sentiment of the given text. Provide a json object with 'sentiment' and 'explanation' keys.",
- multimodal=False
+ multimodal=False,
  )
  return fn_name, version_number, fn_desc

@@ -44,7 +45,7 @@ async def test_text_aquery(text_evaluator):
  async def image_evaluator():
  fn_name, version_number, fn_desc = await abuild(
  task_description="Describe the contents of the given images. Provide a json object with 'description' and 'objects' keys.",
- multimodal=True
+ multimodal=True,
  )
  return fn_name, version_number, fn_desc

@@ -69,7 +70,7 @@ async def test_image_aquery(image_evaluator):
  async def text_and_image_evaluator():
  fn_name, version_number, fn_desc = await abuild(
  task_description="Evaluate, solve and arrive at a numerical answer for the image provided. Provide a json object with 'answer' and 'explanation' keys.",
- multimodal=True
+ multimodal=True,
  )
  return fn_name, version_number, fn_desc

@@ -9,7 +9,7 @@ from weco import batch_query, build
  def ml_task_evaluator():
  fn_name, version_number, _ = build(
  task_description="I want to evaluate the feasibility of a machine learning task. Give me a json object with three keys - 'feasibility', 'justification', and 'suggestions'.",
- multimodal=False
+ multimodal=False,
  )
  return fn_name, version_number

@@ -18,7 +18,9 @@ def ml_task_evaluator():
  def ml_task_inputs():
  return [
  {"text_input": "I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle."},
- {"text_input": "I want to train a model to classify digits using the MNIST dataset hosted on Kaggle using a Google Colab notebook."},
+ {
+ "text_input": "I want to train a model to classify digits using the MNIST dataset hosted on Kaggle using a Google Colab notebook."
+ },
  ]


@@ -26,7 +28,7 @@ def ml_task_inputs():
  def image_evaluator():
  fn_name, version_number, _ = build(
  task_description="Describe the contents of the given images. Provide a json object with 'description' and 'objects' keys.",
- multimodal=True
+ multimodal=True,
  )
  return fn_name, version_number

@@ -34,7 +36,11 @@ def image_evaluator():
  @pytest.fixture
  def image_inputs():
  return [
- {"images_input": ["https://www.integratedtreatmentservices.co.uk/wp-content/uploads/2013/12/Objects-of-Reference.jpg"]},
+ {
+ "images_input": [
+ "https://www.integratedtreatmentservices.co.uk/wp-content/uploads/2013/12/Objects-of-Reference.jpg"
+ ]
+ },
  {"images_input": ["https://t4.ftcdn.net/jpg/05/70/90/23/360_F_570902339_kNj1reH40GFXakTy98EmfiZHci2xvUCS.jpg"]},
  ]

@@ -70,6 +76,7 @@ def test_batch_query_image(image_evaluator, image_inputs):
  assert isinstance(query_response["in_tokens"], int)
  assert isinstance(query_response["out_tokens"], int)
  assert isinstance(query_response["latency_ms"], float)
+ assert "reasoning_steps" not in query_response

  output = query_response["output"]
- assert set(output.keys()) == {"description", "objects"}
+ assert set(output.keys()) == {"description", "objects"}
@@ -0,0 +1,58 @@
+ import pytest
+
+ from weco import build, query
+
+
+ def assert_query_response(query_response):
+ assert isinstance(query_response, dict)
+ assert isinstance(query_response["output"], dict)
+ assert isinstance(query_response["reasoning_steps"], list)
+ for step in query_response["reasoning_steps"]:
+ assert isinstance(step, str)
+ assert isinstance(query_response["in_tokens"], int)
+ assert isinstance(query_response["out_tokens"], int)
+ assert isinstance(query_response["latency_ms"], float)
+
+
+ @pytest.fixture
+ def text_reasoning_evaluator():
+ fn_name, version_number, fn_desc = build(
+ task_description="Evaluate the sentiment of the given text. Provide a json object with 'sentiment' and 'explanation' keys.",
+ multimodal=False,
+ )
+ return fn_name, version_number, fn_desc
+
+
+ def test_text_reasoning_query(text_reasoning_evaluator):
+ fn_name, version_number, _ = text_reasoning_evaluator
+ query_response = query(
+ fn_name=fn_name, version_number=version_number, text_input="I love this product!", return_reasoning=True
+ )
+
+ assert_query_response(query_response)
+ assert set(query_response["output"].keys()) == {"sentiment", "explanation"}
+
+
+ @pytest.fixture
+ def vision_reasoning_evaluator():
+ fn_name, version_number, fn_desc = build(
+ task_description="Evaluate, solve and arrive at a numerical answer for the image provided. Perform any additional things if instructed. Provide a json object with 'answer' and 'explanation' keys.",
+ multimodal=True,
+ )
+ return fn_name, version_number, fn_desc
+
+
+ def test_vision_reasoning_query(vision_reasoning_evaluator):
+ fn_name, version_number, _ = vision_reasoning_evaluator
+ query_response = query(
+ fn_name=fn_name,
+ version_number=version_number,
+ text_input="Find x and y.",
+ images_input=[
+ "https://i.ytimg.com/vi/cblHUeq3bkE/hq720.jpg?sqp=-oaymwEhCK4FEIIDSFryq4qpAxMIARUAAAAAGAElAADIQj0AgKJD&rs=AOn4CLAKn3piY91QRCBzRgnzAPf7MPrjDQ"
+ ],
+ return_reasoning=True,
+ )
+
+ assert_query_response(query_response)
+ assert set(query_response["output"].keys()) == {"answer", "explanation"}
@@ -19,13 +19,14 @@ def assert_query_response(query_response):
  assert isinstance(query_response["in_tokens"], int)
  assert isinstance(query_response["out_tokens"], int)
  assert isinstance(query_response["latency_ms"], float)
+ assert "reasoning_steps" not in query_response


  @pytest.fixture
  def text_evaluator():
  fn_name, version_number, fn_desc = build(
  task_description="Evaluate the sentiment of the given text. Provide a json object with 'sentiment' and 'explanation' keys.",
- multimodal=False
+ multimodal=False,
  )
  return fn_name, version_number, fn_desc

@@ -42,7 +43,7 @@ def test_text_query(text_evaluator):
  def image_evaluator():
  fn_name, version_number, fn_desc = build(
  task_description="Describe the contents of the given images. Provide a json object with 'description' and 'objects' keys.",
- multimodal=True
+ multimodal=True,
  )
  return fn_name, version_number, fn_desc

@@ -66,7 +67,7 @@ def test_image_query(image_evaluator):
  def text_and_image_evaluator():
  fn_name, version_number, fn_desc = build(
  task_description="Evaluate, solve and arrive at a numerical answer for the image provided. Perform any additional things if instructed. Provide a json object with 'answer' and 'explanation' keys.",
- multimodal=True
+ multimodal=True,
  )
  return fn_name, version_number, fn_desc

@@ -31,7 +31,7 @@ class WecoAI:
  ----------
  api_key : str
  The API key used for authentication.
-
+
  timeout : float
  The timeout for the HTTP requests in seconds. Default is 120.0.

@@ -39,7 +39,7 @@ class WecoAI:
  Whether to use HTTP/2 protocol for the HTTP requests. Default is True.
  """

- def __init__(self, api_key: str = None, timeout: float = 120.0, http2: bool = True) -> None:
+ def __init__(self, api_key: Union[str, None] = None, timeout: float = 120.0, http2: bool = True) -> None:
  """Initializes the WecoAI client with the provided API key and base URL.

  Parameters
@@ -68,6 +68,7 @@ class WecoAI:
  self.http2 = http2
  self.timeout = timeout
  self.base_url = "https://function.api.weco.ai"
+
  # Setup clients
  self.client = httpx.Client(http2=http2, timeout=timeout)
  self.async_client = httpx.AsyncClient(http2=http2, timeout=timeout)
@@ -153,24 +154,29 @@ class WecoAI:
  for _warning in response.get("warnings", []):
  warnings.warn(_warning)

- return {
+ returned_response = {
  "output": response["response"],
  "in_tokens": response["num_input_tokens"],
  "out_tokens": response["num_output_tokens"],
  "latency_ms": response["latency_ms"],
  }
+ if "reasoning_steps" in response:
+ returned_response["reasoning_steps"] = response["reasoning_steps"]
+ return returned_response

- def _build(self, task_description: str, multimodal: bool, is_async: bool) -> Union[Tuple[str, int, str], Coroutine[Any, Any, Tuple[str, int, str]]]:
+ def _build(
+ self, task_description: str, multimodal: bool, is_async: bool
+ ) -> Union[Tuple[str, int, str], Coroutine[Any, Any, Tuple[str, int, str]]]:
  """Internal method to handle both synchronous and asynchronous build requests.

  Parameters
  ----------
  task_description : str
  A description of the task for which the function is being built.
-
+
  multimodal : bool
  Whether the function is multimodal or not.
-
+
  is_async : bool
  Whether to perform an asynchronous request.

@@ -212,7 +218,7 @@ class WecoAI:
  ----------
  task_description : str
  A description of the task for which the function is being built.
-
+
  multimodal : bool, optional
  Whether the function is multimodal or not (default is False).

@@ -230,7 +236,7 @@ class WecoAI:
  ----------
  task_description : str
  A description of the task for which the function is being built.
-
+
  multimodal : bool, optional
  Whether the function is multimodal or not (default is False).

@@ -385,7 +391,13 @@ class WecoAI:
  return image_info

  def _query(
- self, is_async: bool, fn_name: str, version_number: Optional[int], text_input: Optional[str], images_input: Optional[List[str]]
+ self,
+ is_async: bool,
+ fn_name: str,
+ version_number: Optional[int],
+ text_input: Optional[str],
+ images_input: Optional[List[str]],
+ return_reasoning: Optional[bool],
  ) -> Union[Dict[str, Any], Coroutine[Any, Any, Dict[str, Any]]]:
  """Internal method to handle both synchronous and asynchronous query requests.

@@ -401,6 +413,8 @@ class WecoAI:
  The text input to the function.
  images_input : List[str], optional
  A list of image URLs or images encoded in base64 with their metadata to be sent as input to the function.
+ return_reasoning : bool, optional
+ Whether to return reasoning for the output.

  Returns
  -------
@@ -427,7 +441,13 @@ class WecoAI:

  # Make the request
  endpoint = "query"
- data = {"name": fn_name, "text": text_input, "images": image_urls, "version_number": version_number}
+ data = {
+ "name": fn_name,
+ "text": text_input,
+ "images": image_urls,
+ "version_number": version_number,
+ "return_reasoning": return_reasoning,
+ }
  request = self._make_request(endpoint=endpoint, data=data, is_async=is_async)

  if is_async:
@@ -442,7 +462,12 @@ class WecoAI:
  return self._process_query_response(response=response)

  async def aquery(
- self, fn_name: str, version_number: Optional[int] = -1, text_input: Optional[str] = "", images_input: Optional[List[str]] = []
+ self,
+ fn_name: str,
+ version_number: Optional[int] = -1,
+ text_input: Optional[str] = "",
+ images_input: Optional[List[str]] = [],
+ return_reasoning: Optional[bool] = False,
  ) -> Dict[str, Any]:
  """Asynchronously queries a function with the given function ID and input.

@@ -456,6 +481,8 @@ class WecoAI:
  The text input to the function.
  images_input : List[str], optional
  A list of image URLs or images encoded in base64 with their metadata to be sent as input to the function.
+ return_reasoning : bool, optional
+ Whether to return reasoning for the output. Default is False.

  Returns
  -------
@@ -463,9 +490,23 @@ class WecoAI:
  A dictionary containing the output of the function, the number of input tokens, the number of output tokens,
  and the latency in milliseconds.
  """
- return await self._query(fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input, is_async=True)
-
- def query(self, fn_name: str, version_number: Optional[int] = -1, text_input: Optional[str] = "", images_input: Optional[List[str]] = []) -> Dict[str, Any]:
+ return await self._query(
+ fn_name=fn_name,
+ version_number=version_number,
+ text_input=text_input,
+ images_input=images_input,
+ return_reasoning=return_reasoning,
+ is_async=True,
+ )
+
+ def query(
+ self,
+ fn_name: str,
+ version_number: Optional[int] = -1,
+ text_input: Optional[str] = "",
+ images_input: Optional[List[str]] = [],
+ return_reasoning: Optional[bool] = False,
+ ) -> Dict[str, Any]:
  """Synchronously queries a function with the given function ID and input.

  Parameters
@@ -478,6 +519,8 @@ class WecoAI:
  The text input to the function.
  images_input : List[str], optional
  A list of image URLs or images encoded in base64 with their metadata to be sent as input to the function.
+ return_reasoning : bool, optional
+ Whether to return reasoning for the output. Default is False.

  Returns
  -------
@@ -485,23 +528,36 @@ class WecoAI:
  A dictionary containing the output of the function, the number of input tokens, the number of output tokens,
  and the latency in milliseconds.
  """
- return self._query(fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input, is_async=False)
-
- def batch_query(self, fn_name: str, batch_inputs: List[Dict[str, Any]], version_number: Optional[int] = -1) -> List[Dict[str, Any]]:
+ return self._query(
+ fn_name=fn_name,
+ version_number=version_number,
+ text_input=text_input,
+ images_input=images_input,
+ return_reasoning=return_reasoning,
+ is_async=False,
+ )
+
+ def batch_query(
+ self,
+ fn_name: str,
+ batch_inputs: List[Dict[str, Any]],
+ version_number: Optional[int] = -1,
+ return_reasoning: Optional[bool] = False,
+ ) -> List[Dict[str, Any]]:
  """Batch queries a function version with a list of inputs.

  Parameters
  ----------
  fn_name : str
  The name of the function or a list of function names to query.
-
  batch_inputs : List[Dict[str, Any]]
  A list of inputs for the functions to query. The input must be a dictionary containing the data to be processed. e.g.,
  when providing for a text input, the dictionary should be {"text_input": "input text"}, for an image input, the dictionary should be {"images_input": ["url1", "url2", ...]}
  and for a combination of text and image inputs, the dictionary should be {"text_input": "input text", "images_input": ["url1", "url2", ...]}.
-
  version_number : int, optional
  The version number of the function to query. If not provided, the latest version will be used. Pass -1 to use the latest version.
+ return_reasoning : bool, optional
+ Whether to return reasoning for the output. Default is False.

  Returns
  -------
@@ -509,11 +565,16 @@ class WecoAI:
  A list of dictionaries, each containing the output of a function query,
  in the same order as the input queries.
  """
+
  async def run_queries():
- tasks = list(map(
- lambda fn_input: self.aquery(fn_name=fn_name, version_number=version_number, **fn_input),
- batch_inputs
- ))
+ tasks = list(
+ map(
+ lambda fn_input: self.aquery(
+ fn_name=fn_name, version_number=version_number, return_reasoning=return_reasoning, **fn_input
+ ),
+ batch_inputs,
+ )
+ )
  return await asyncio.gather(*tasks)

  return asyncio.run(run_queries())
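The rewritten `batch_query` body above fans the inputs out through `aquery` and awaits them together with `asyncio.gather`, which preserves input order in the results. A generic sketch of that fan-out pattern, independent of the WecoAI client; the `fake_aquery` coroutine is a stand-in for illustration, not part of the package:

```python
import asyncio
from typing import Any, Dict, List


async def fake_aquery(text_input: str) -> Dict[str, Any]:
    """Stand-in coroutine simulating one asynchronous query."""
    await asyncio.sleep(0.01)  # pretend network latency
    return {"output": {"echo": text_input}}


async def run_queries(batch_inputs: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    # One aquery-style task per input; gather returns results in input order.
    tasks = [fake_aquery(**fn_input) for fn_input in batch_inputs]
    return await asyncio.gather(*tasks)


if __name__ == "__main__":
    inputs = [{"text_input": "first"}, {"text_input": "second"}]
    print(asyncio.run(run_queries(inputs)))
```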
@@ -48,7 +48,12 @@ async def abuild(task_description: str, multimodal: bool = False, api_key: str =


  def query(
- fn_name: str, version_number: Optional[int] = -1, text_input: Optional[str] = "", images_input: Optional[List[str]] = [], api_key: Optional[str] = None
+ fn_name: str,
+ version_number: Optional[int] = -1,
+ text_input: Optional[str] = "",
+ images_input: Optional[List[str]] = [],
+ return_reasoning: Optional[bool] = False,
+ api_key: Optional[str] = None,
  ) -> Dict[str, Any]:
  """Queries a function synchronously with the given function ID and input.

@@ -62,6 +67,8 @@ def query(
  The text input to the function.
  images_input : List[str], optional
  A list of image URLs or base64 encoded images to be used as input to the function.
+ return_reasoning : bool, optional
+ A flag to indicate if the reasoning should be returned. Default is False.
  api_key : str
  The API key for the WecoAI service. If not provided, the API key must be set using the environment variable - WECO_API_KEY.

@@ -72,12 +79,23 @@ def query(
  and the latency in milliseconds.
  """
  client = WecoAI(api_key=api_key)
- response = client.query(fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input)
+ response = client.query(
+ fn_name=fn_name,
+ version_number=version_number,
+ text_input=text_input,
+ images_input=images_input,
+ return_reasoning=return_reasoning,
+ )
  return response


  async def aquery(
- fn_name: str, version_number: Optional[int] = -1, text_input: Optional[str] = "", images_input: Optional[List[str]] = [], api_key: Optional[str] = None
+ fn_name: str,
+ version_number: Optional[int] = -1,
+ text_input: Optional[str] = "",
+ images_input: Optional[List[str]] = [],
+ return_reasoning: Optional[bool] = False,
+ api_key: Optional[str] = None,
  ) -> Dict[str, Any]:
  """Queries a function asynchronously with the given function ID and input.

@@ -91,6 +109,8 @@ async def aquery(
  The text input to the function.
  images_input : List[str], optional
  A list of image URLs to be used as input to the function.
+ return_reasoning : bool, optional
+ A flag to indicate if the reasoning should be returned. Default is False.
  api_key : str
  The API key for the WecoAI service. If not provided, the API key must be set using the environment variable - WECO_API_KEY.

@@ -101,12 +121,22 @@ async def aquery(
  and the latency in milliseconds.
  """
  client = WecoAI(api_key=api_key)
- response = await client.aquery(fn_name=fn_name, version_number=version_number, text_input=text_input, images_input=images_input)
+ response = await client.aquery(
+ fn_name=fn_name,
+ version_number=version_number,
+ text_input=text_input,
+ images_input=images_input,
+ return_reasoning=return_reasoning,
+ )
  return response


  def batch_query(
- fn_name: str, batch_inputs: List[Dict[str, Any]], version_number: Optional[int] = -1, api_key: Optional[str] = None
+ fn_name: str,
+ batch_inputs: List[Dict[str, Any]],
+ version_number: Optional[int] = -1,
+ return_reasoning: Optional[bool] = False,
+ api_key: Optional[str] = None,
  ) -> List[Dict[str, Any]]:
  """Synchronously queries multiple functions using asynchronous calls internally.

@@ -119,15 +149,14 @@ def batch_query(
  The name of the function or a list of function names to query.
  Note that if a single function name is provided, it will be used for all queries.
  If a list of function names is provided, the length must match the number of queries.
-
  batch_inputs : List[str]
  A list of inputs for the functions to query. The input must be a dictionary containing the data to be processed. e.g.,
  when providing for a text input, the dictionary should be {"text_input": "input text"}, for an image input, the dictionary should be {"images_input": ["url1", "url2", ...]}
  and for a combination of text and image inputs, the dictionary should be {"text_input": "input text", "images_input": ["url1", "url2", ...]}.
-
  version_number : int, optional
  The version number of the function to query. If not provided, the latest version is used. Default is -1 for the same behavior.
-
+ return_reasoning : bool, optional
+ A flag to indicate if the reasoning should be returned. Default is False.
  api_key : str, optional
  The API key for the WecoAI service. If not provided, the API key must be set using the environment variable - WECO_API_KEY.

@@ -138,5 +167,7 @@ def batch_query(
  in the same order as the input queries.
  """
  client = WecoAI(api_key=api_key)
- responses = client.batch_query(fn_name=fn_name, version_number=version_number, batch_inputs=batch_inputs)
+ responses = client.batch_query(
+ fn_name=fn_name, version_number=version_number, batch_inputs=batch_inputs, return_reasoning=return_reasoning
+ )
  return responses
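Putting the module-level API changes together: `query`, `aquery`, and `batch_query` now all accept `return_reasoning` and pass it through to the client. A hedged usage sketch, assuming the package is installed, WECO_API_KEY is set in the environment, and a function has already been built; the function name below is hypothetical:

```python
from weco import query

# "sentiment-evaluator" is a hypothetical function name; build() or the web console
# would give you the real one. Requires WECO_API_KEY to be set in the environment.
response = query(
    fn_name="sentiment-evaluator",
    text_input="I love this product!",
    return_reasoning=True,
)

print(response["output"])
# "reasoning_steps" is only present because return_reasoning=True was passed.
for i, step in enumerate(response.get("reasoning_steps", [])):
    print(f"Step {i + 1}: {step}")
```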
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: weco
- Version: 0.1.7
+ Version: 0.1.9
  Summary: A client facing API for interacting with the WeCo AI function builder service.
  Author-email: WeCo AI Team <dhruv@weco.ai>
  License: MIT
@@ -52,12 +52,16 @@ pip install weco
  ```

  ## Features
+ - Synchronous & Asynchronous client.
+ - Batch API
+ - Multimodality (Language & Vision)
+ - Interpretability (view the reasoning behind outputs)
+
+
+ ## What We Offer

  - The **build** function enables quick and easy prototyping of new functions via LLMs through just natural language. We encourage users to do this through our [web console](https://weco-app.vercel.app/function) for maximum control and ease of use, however, you can also do this through our API as shown in [here](examples/cookbook.ipynb).
  - The **query** function allows you to test and use the newly created function in your own code.
- - We offer asynchronous versions of the above clients.
- - We provide a **batch_query** functions that allows users to batch functions for various inputs as well as multiple inputs for the same function in a query. This is helpful to make a large number of queries more efficiently.
- - We also offer multimodality capabilities. You can now query our client with both **language** AND **vision** inputs!

  We provide both services in two ways:
  - `weco.WecoAI` client to be used when you want to maintain the same client service across a portion of code. This is better for dense service usage.
@@ -8,6 +8,7 @@ assets/weco.svg
  examples/cookbook.ipynb
  tests/test_asynchronous.py
  tests/test_batching.py
+ tests/test_reasoning.py
  tests/test_synchronous.py
  weco/__init__.py
  weco/client.py