windmill-cli 1.595.0 → 1.597.1

@@ -1,2 +1,2 @@
- export declare const SCRIPT_GUIDANCE = "\nEach script should be placed in a folder. Ask the user which folder they want the script to be located in before you start coding.\nAfter writing a script, you do not need to create .lock and .yaml files manually. Instead, you can run the `wmill script generate-metadata` bash command. This command takes no arguments. After writing the script, you can ask the user if they want to push the script with `wmill sync push`. Both should be run at the root of the repository.\n\nYou can use `wmill resource-type list --schema` to list all available resource types. You should use that to know the type of the resource you need to use in your script. You can use grep if the output is too long.\n\n# Windmill Script Writing Guide\n\n## General Principles\n\n- Scripts must export a main function (do not call it)\n- Libraries are installed automatically - do not show installation instructions\n- Credentials and configuration are stored in resources and passed as parameters\n- The windmill client (`wmill`) provides APIs for interacting with the platform\n\n## Function Naming\n\n- Main function: `main` (or `preprocessor` for preprocessor scripts)\n- Must be async for TypeScript variants\n\n## Return Values\n\n- Scripts can return any JSON-serializable value\n- Return values become available to subsequent flow steps via `results.step_id`\n\n## Preprocessor Scripts\n\nPreprocessor scripts process raw trigger data from various sources (webhook, custom HTTP route, SQS, WebSocket, Kafka, NATS, MQTT, Postgres, or email) before passing it to the flow. 
This separates the trigger logic from the flow logic and keeps the auto-generated UI clean.\n\nThe returned object determines the parameter values passed to the flow.\ne.g., `{ b: 1, a: 2 }` calls the flow with `a = 2` and `b = 1`, assuming the flow has two inputs called `a` and `b`.\n\nThe preprocessor receives a single parameter called `event`.\n\n\n# Bash\n\n## Structure\n\nDo not include `#!/bin/bash`. Arguments are obtained as positional parameters:\n\n```bash\n# Get arguments\nvar1=\"$1\"\nvar2=\"$2\"\n\necho \"Processing $var1 and $var2\"\n\n# Return JSON by echoing to stdout\necho \"{\\\"result\\\": \\\"$var1\\\", \\\"count\\\": $var2}\"\n```\n\n**Important:**\n- Do not include shebang (`#!/bin/bash`)\n- Arguments are always strings\n- Access with `$1`, `$2`, etc.\n\n## Output\n\nThe script output is captured as the result. For structured data, output valid JSON:\n\n```bash\nname=\"$1\"\ncount=\"$2\"\n\n# Output JSON result\ncat << EOF\n{\n \"name\": \"$name\",\n \"count\": $count,\n \"timestamp\": \"$(date -Iseconds)\"\n}\nEOF\n```\n\n## Environment Variables\n\nEnvironment variables set in Windmill are available:\n\n```bash\n# Access environment variable\necho \"Workspace: $WM_WORKSPACE\"\necho \"Job ID: $WM_JOB_ID\"\n```\n\n\n# BigQuery\n\nArguments use `@name` syntax.\n\nName the parameters by adding comments before the statement:\n\n```sql\n-- @name1 (string)\n-- @name2 (int64) = 0\nSELECT * FROM users WHERE name = @name1 AND age > @name2;\n```\n\n\n# TypeScript (Bun)\n\nBun runtime with full npm ecosystem and fastest execution.\n\n## Structure\n\nExport a single **async** function called `main`:\n\n```typescript\nexport async function main(param1: string, param2: number) {\n // Your code here\n return { result: param1, count: param2 };\n}\n```\n\nDo not call the main function. 
Libraries are installed automatically.\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nUse the `RT` namespace for resource types:\n\n```typescript\nexport async function main(stripe: RT.Stripe) {\n // stripe contains API key and config from the resource\n}\n```\n\nOnly use resource types if you need them to satisfy the instructions. Always use the RT namespace.\n\n## Imports\n\n```typescript\nimport Stripe from \"stripe\";\nimport { someFunction } from \"some-package\";\n```\n\n## Windmill Client\n\nImport the windmill client for platform interactions:\n\n```typescript\nimport * as wmill from \"windmill-client\";\n```\n\nSee the SDK documentation for available methods.\n\n## Preprocessor Scripts\n\nFor preprocessor scripts, the function should be named `preprocessor` and receives an `event` parameter:\n\n```typescript\ntype Event = {\n kind:\n | \"webhook\"\n | \"http\"\n | \"websocket\"\n | \"kafka\"\n | \"email\"\n | \"nats\"\n | \"postgres\"\n | \"sqs\"\n | \"mqtt\"\n | \"gcp\";\n body: any;\n headers: Record<string, string>;\n query: Record<string, string>;\n};\n\nexport async function preprocessor(event: Event) {\n return {\n param1: event.body.field1,\n param2: event.query.id,\n };\n}\n```\n\n## S3 Object Operations\n\nWindmill provides built-in support for S3-compatible storage operations.\n\n### S3Object Type\n\nThe S3Object type represents a file in S3 storage:\n\n```typescript\ntype S3Object = {\n s3: string; // Path within the bucket\n};\n```\n\n## TypeScript Operations\n\n```typescript\nimport * as wmill from \"windmill-client\";\n\n// Load file content from S3\nconst content: Uint8Array = await wmill.loadS3File(s3object);\n\n// Load file as stream\nconst blob: Blob = await wmill.loadS3FileStream(s3object);\n\n// Write file to S3\nconst result: S3Object = await wmill.writeS3File(\n s3object, // Target path (or undefined to auto-generate)\n fileContent, // string or Blob\n 
s3ResourcePath // Optional: specific S3 resource to use\n);\n```\n\n\n# TypeScript (Bun Native)\n\nNative TypeScript execution with fetch only - no external imports allowed.\n\n## Structure\n\nExport a single **async** function called `main`:\n\n```typescript\nexport async function main(param1: string, param2: number) {\n // Your code here\n return { result: param1, count: param2 };\n}\n```\n\nDo not call the main function.\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nUse the `RT` namespace for resource types:\n\n```typescript\nexport async function main(stripe: RT.Stripe) {\n // stripe contains API key and config from the resource\n}\n```\n\nOnly use resource types if you need them to satisfy the instructions. Always use the RT namespace.\n\n## Imports\n\n**No imports allowed.** Use the globally available `fetch` function:\n\n```typescript\nexport async function main(url: string) {\n const response = await fetch(url);\n return await response.json();\n}\n```\n\n## Windmill Client\n\nThe windmill client is not available in native TypeScript mode. 
Use fetch to call APIs directly.\n\n## Preprocessor Scripts\n\nFor preprocessor scripts, the function should be named `preprocessor` and receives an `event` parameter:\n\n```typescript\ntype Event = {\n kind:\n | \"webhook\"\n | \"http\"\n | \"websocket\"\n | \"kafka\"\n | \"email\"\n | \"nats\"\n | \"postgres\"\n | \"sqs\"\n | \"mqtt\"\n | \"gcp\";\n body: any;\n headers: Record<string, string>;\n query: Record<string, string>;\n};\n\nexport async function preprocessor(event: Event) {\n return {\n param1: event.body.field1,\n param2: event.query.id,\n };\n}\n```\n\n## S3 Object Operations\n\nWindmill provides built-in support for S3-compatible storage operations.\n\n### S3Object Type\n\nThe S3Object type represents a file in S3 storage:\n\n```typescript\ntype S3Object = {\n s3: string; // Path within the bucket\n};\n```\n\n## TypeScript Operations\n\n```typescript\nimport * as wmill from \"windmill-client\";\n\n// Load file content from S3\nconst content: Uint8Array = await wmill.loadS3File(s3object);\n\n// Load file as stream\nconst blob: Blob = await wmill.loadS3FileStream(s3object);\n\n// Write file to S3\nconst result: S3Object = await wmill.writeS3File(\n s3object, // Target path (or undefined to auto-generate)\n fileContent, // string or Blob\n s3ResourcePath // Optional: specific S3 resource to use\n);\n```\n\n\n# C#\n\nThe script must contain a public static `Main` method inside a class:\n\n```csharp\npublic class Script\n{\n public static object Main(string name, int count)\n {\n return new { Name = name, Count = count };\n }\n}\n```\n\n**Important:**\n- Class name is irrelevant\n- Method must be `public static`\n- Return type can be `object` or specific type\n\n## NuGet Packages\n\nAdd packages using the `#r` directive at the top:\n\n```csharp\n#r \"nuget: Newtonsoft.Json, 13.0.3\"\n#r \"nuget: RestSharp, 110.2.0\"\n\nusing Newtonsoft.Json;\nusing RestSharp;\n\npublic class Script\n{\n public static object Main(string url)\n {\n var client = new 
RestClient(url);\n var request = new RestRequest();\n var response = client.Get(request);\n return JsonConvert.DeserializeObject(response.Content);\n }\n}\n```\n\n\n# TypeScript (Deno)\n\nDeno runtime with npm support via `npm:` prefix and native Deno libraries.\n\n## Structure\n\nExport a single **async** function called `main`:\n\n```typescript\nexport async function main(param1: string, param2: number) {\n // Your code here\n return { result: param1, count: param2 };\n}\n```\n\nDo not call the main function. Libraries are installed automatically.\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nUse the `RT` namespace for resource types:\n\n```typescript\nexport async function main(stripe: RT.Stripe) {\n // stripe contains API key and config from the resource\n}\n```\n\nOnly use resource types if you need them to satisfy the instructions. Always use the RT namespace.\n\n## Imports\n\n```typescript\n// npm packages use npm: prefix\nimport Stripe from \"npm:stripe\";\nimport { someFunction } from \"npm:some-package\";\n\n// Deno standard library\nimport { serve } from \"https://deno.land/std/http/server.ts\";\n```\n\n## Windmill Client\n\nImport the windmill client for platform interactions:\n\n```typescript\nimport * as wmill from \"windmill-client\";\n```\n\nSee the SDK documentation for available methods.\n\n## Preprocessor Scripts\n\nFor preprocessor scripts, the function should be named `preprocessor` and receives an `event` parameter:\n\n```typescript\ntype Event = {\n kind:\n | \"webhook\"\n | \"http\"\n | \"websocket\"\n | \"kafka\"\n | \"email\"\n | \"nats\"\n | \"postgres\"\n | \"sqs\"\n | \"mqtt\"\n | \"gcp\";\n body: any;\n headers: Record<string, string>;\n query: Record<string, string>;\n};\n\nexport async function preprocessor(event: Event) {\n return {\n param1: event.body.field1,\n param2: event.query.id,\n };\n}\n```\n\n## S3 Object Operations\n\nWindmill provides 
built-in support for S3-compatible storage operations.\n\n### S3Object Type\n\nThe S3Object type represents a file in S3 storage:\n\n```typescript\ntype S3Object = {\n s3: string; // Path within the bucket\n};\n```\n\n## TypeScript Operations\n\n```typescript\nimport * as wmill from \"windmill-client\";\n\n// Load file content from S3\nconst content: Uint8Array = await wmill.loadS3File(s3object);\n\n// Load file as stream\nconst blob: Blob = await wmill.loadS3FileStream(s3object);\n\n// Write file to S3\nconst result: S3Object = await wmill.writeS3File(\n s3object, // Target path (or undefined to auto-generate)\n fileContent, // string or Blob\n s3ResourcePath // Optional: specific S3 resource to use\n);\n```\n\n\n# DuckDB\n\nArguments are defined with comments and used with `$name` syntax:\n\n```sql\n-- $name (text) = default\n-- $age (integer)\nSELECT * FROM users WHERE name = $name AND age > $age;\n```\n\n## Ducklake Integration\n\nAttach Ducklake for data lake operations:\n\n```sql\n-- Main ducklake\nATTACH 'ducklake' AS dl;\n\n-- Named ducklake\nATTACH 'ducklake://my_lake' AS dl;\n\n-- Then query\nSELECT * FROM dl.schema.table;\n```\n\n## External Database Connections\n\nConnect to external databases using resources:\n\n```sql\nATTACH '$res:path/to/resource' AS db (TYPE postgres);\nSELECT * FROM db.schema.table;\n```\n\n## S3 File Operations\n\nRead files from S3 storage:\n\n```sql\n-- Default storage\nSELECT * FROM read_csv('s3:///path/to/file.csv');\n\n-- Named storage\nSELECT * FROM read_csv('s3://storage_name/path/to/file.csv');\n\n-- Parquet files\nSELECT * FROM read_parquet('s3:///path/to/file.parquet');\n\n-- JSON files\nSELECT * FROM read_json('s3:///path/to/file.json');\n```\n\n\n# Go\n\n## Structure\n\nThe file package must be `inner` and export a function called `main`:\n\n```go\npackage inner\n\nfunc main(param1 string, param2 int) (map[string]interface{}, error) {\n return map[string]interface{}{\n \"result\": param1,\n \"count\": param2,\n }, 
nil\n}\n```\n\n**Important:**\n- Package must be `inner`\n- Return type must be `({return_type}, error)`\n- Function name is `main` (lowercase)\n\n## Return Types\n\nThe return type can be any Go type that can be serialized to JSON:\n\n```go\npackage inner\n\ntype Result struct {\n Name string `json:\"name\"`\n Count int `json:\"count\"`\n}\n\nfunc main(name string, count int) (Result, error) {\n return Result{\n Name: name,\n Count: count,\n }, nil\n}\n```\n\n## Error Handling\n\nReturn errors as the second return value:\n\n```go\npackage inner\n\nimport \"errors\"\n\nfunc main(value int) (string, error) {\n if value < 0 {\n return \"\", errors.New(\"value must be positive\")\n }\n return \"success\", nil\n}\n```\n\n\n# GraphQL\n\n## Structure\n\nWrite GraphQL queries or mutations. Arguments can be added as query parameters:\n\n```graphql\nquery GetUser($id: ID!) {\n user(id: $id) {\n id\n name\n email\n }\n}\n```\n\n## Variables\n\nVariables are passed as script arguments and automatically bound to the query:\n\n```graphql\nquery SearchProducts($query: String!, $limit: Int = 10) {\n products(search: $query, first: $limit) {\n edges {\n node {\n id\n name\n price\n }\n }\n }\n}\n```\n\n## Mutations\n\n```graphql\nmutation CreateUser($input: CreateUserInput!) 
{\n createUser(input: $input) {\n id\n name\n createdAt\n }\n}\n```\n\n\n# Java\n\nThe script must contain a public class named `Main` with a `public static Object main(...)` method:\n\n```java\npublic class Main {\n public static Object main(String name, int count) {\n java.util.Map<String, Object> result = new java.util.HashMap<>();\n result.put(\"name\", name);\n result.put(\"count\", count);\n return result;\n }\n}\n```\n\n**Important:**\n- Class must be named `Main`\n- Method must be `public static Object main(...)`\n- Return type is `Object` or `void`\n\n## Maven Dependencies\n\nAdd dependencies using comments at the top:\n\n```java\n//requirements:\n//com.google.code.gson:gson:2.10.1\n//org.apache.httpcomponents:httpclient:4.5.14\n\nimport com.google.gson.Gson;\n\npublic class Main {\n public static Object main(String input) {\n Gson gson = new Gson();\n return gson.fromJson(input, Object.class);\n }\n}\n```\n\n\n# Microsoft SQL Server (MSSQL)\n\nArguments use `@P1`, `@P2`, etc.\n\nName the parameters by adding comments before the statement:\n\n```sql\n-- @P1 name1 (varchar)\n-- @P2 name2 (int) = 0\nSELECT * FROM users WHERE name = @P1 AND age > @P2;\n```\n\n\n# MySQL\n\nArguments use `?` placeholders.\n\nName the parameters by adding comments before the statement:\n\n```sql\n-- ? name1 (text)\n-- ? name2 (int) = 0\nSELECT * FROM users WHERE name = ? 
AND age > ?;\n```\n\n\n# TypeScript (Native)\n\nNative TypeScript execution with fetch only - no external imports allowed.\n\n## Structure\n\nExport a single **async** function called `main`:\n\n```typescript\nexport async function main(param1: string, param2: number) {\n // Your code here\n return { result: param1, count: param2 };\n}\n```\n\nDo not call the main function.\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nUse the `RT` namespace for resource types:\n\n```typescript\nexport async function main(stripe: RT.Stripe) {\n // stripe contains API key and config from the resource\n}\n```\n\nOnly use resource types if you need them to satisfy the instructions. Always use the RT namespace.\n\n## Imports\n\n**No imports allowed.** Use the globally available `fetch` function:\n\n```typescript\nexport async function main(url: string) {\n const response = await fetch(url);\n return await response.json();\n}\n```\n\n## Windmill Client\n\nThe windmill client is not available in native TypeScript mode. 
Use fetch to call APIs directly.\n\n## Preprocessor Scripts\n\nFor preprocessor scripts, the function should be named `preprocessor` and receives an `event` parameter:\n\n```typescript\ntype Event = {\n kind:\n | \"webhook\"\n | \"http\"\n | \"websocket\"\n | \"kafka\"\n | \"email\"\n | \"nats\"\n | \"postgres\"\n | \"sqs\"\n | \"mqtt\"\n | \"gcp\";\n body: any;\n headers: Record<string, string>;\n query: Record<string, string>;\n};\n\nexport async function preprocessor(event: Event) {\n return {\n param1: event.body.field1,\n param2: event.query.id\n };\n}\n```\n\n\n# PHP\n\n## Structure\n\nThe script must start with `<?php` and contain at least one function called `main`:\n\n```php\n<?php\n\nfunction main(string $param1, int $param2) {\n return [\"result\" => $param1, \"count\" => $param2];\n}\n```\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nYou need to **redefine** the type of the resources that are needed before the main function. Always check if the class already exists using `class_exists`:\n\n```php\n<?php\n\nif (!class_exists('Postgresql')) {\n class Postgresql {\n public string $host;\n public int $port;\n public string $user;\n public string $password;\n public string $dbname;\n }\n}\n\nfunction main(Postgresql $db) {\n // $db contains the database connection details\n}\n```\n\nThe resource type name has to be exactly as specified.\n\n## Library Dependencies\n\nSpecify library dependencies as comments before the main function:\n\n```php\n<?php\n\n// require:\n// guzzlehttp/guzzle\n// stripe/stripe-php@^10.0\n\nfunction main() {\n // Libraries are available\n}\n```\n\nOne dependency per line. 
No need to require autoload, it is already done.\n\n\n# PostgreSQL\n\nArguments are obtained directly in the statement with `$1::{type}`, `$2::{type}`, etc.\n\nName the parameters by adding comments at the beginning of the script (without specifying the type):\n\n```sql\n-- $1 name1\n-- $2 name2 = default_value\nSELECT * FROM users WHERE name = $1::TEXT AND age > $2::INT;\n```\n\n\n# PowerShell\n\n## Structure\n\nArguments are declared with a `param` block on the first line:\n\n```powershell\nparam($Name, $Count = 0, [int]$Age)\n\n# Your code here\nWrite-Output \"Processing $Name, count: $Count, age: $Age\"\n\n# Return object\n@{\n name = $Name\n count = $Count\n age = $Age\n}\n```\n\n## Parameter Types\n\nYou can specify types for parameters:\n\n```powershell\nparam(\n [string]$Name,\n [int]$Count = 0,\n [bool]$Enabled = $true,\n [array]$Items\n)\n\n@{\n name = $Name\n count = $Count\n enabled = $Enabled\n items = $Items\n}\n```\n\n## Return Values\n\nReturn a value by outputting it at the end of the script:\n\n```powershell\n# avoid $Input as a parameter name; it is a reserved automatic variable in PowerShell\nparam($Data)\n\n$result = @{\n processed = $true\n data = $Data\n timestamp = Get-Date -Format \"o\"\n}\n\n$result\n```\n\n\n# Python\n\n## Structure\n\nThe script must contain at least one function called `main`:\n\n```python\ndef main(param1: str, param2: int):\n # Your code here\n return {\"result\": param1, \"count\": param2}\n```\n\nDo not call the main function. 
Libraries are installed automatically.\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nYou need to **redefine** the type of the resources that are needed before the main function as TypedDict:\n\n```python\nfrom typing import TypedDict\n\nclass postgresql(TypedDict):\n host: str\n port: int\n user: str\n password: str\n dbname: str\n\ndef main(db: postgresql):\n # db contains the database connection details\n pass\n```\n\n**Important rules:**\n\n- The resource type name must be **IN LOWERCASE**\n- Only include resource types if they are actually needed\n- If an import conflicts with a resource type name, **rename the imported object, not the type name**\n- Make sure to import TypedDict from typing **if you're using it**\n\n## Imports\n\nLibraries are installed automatically. Do not show installation instructions.\n\n```python\nimport requests\nimport pandas as pd\nfrom datetime import datetime\n```\n\nIf an import name conflicts with a resource type:\n\n```python\n# Wrong - don't rename the type\nimport stripe as stripe_lib\nclass stripe_type(TypedDict): ...\n\n# Correct - rename the import\nimport stripe as stripe_sdk\nclass stripe(TypedDict):\n api_key: str\n```\n\n## Windmill Client\n\nImport the windmill client for platform interactions:\n\n```python\nimport wmill\n```\n\nSee the SDK documentation for available methods.\n\n## Preprocessor Scripts\n\nFor preprocessor scripts, the function should be named `preprocessor` and receives an `event` parameter:\n\n```python\nfrom typing import TypedDict, Literal, Any\n\nclass Event(TypedDict):\n kind: Literal[\"webhook\", \"http\", \"websocket\", \"kafka\", \"email\", \"nats\", \"postgres\", \"sqs\", \"mqtt\", \"gcp\"]\n body: Any\n headers: dict[str, str]\n query: dict[str, str]\n\ndef preprocessor(event: Event):\n # Transform the event into flow input parameters\n return {\n \"param1\": event[\"body\"][\"field1\"],\n \"param2\": 
event[\"query\"][\"id\"]\n }\n```\n\n## S3 Object Operations\n\nWindmill provides built-in support for S3-compatible storage operations.\n\n```python\nimport wmill\n\n# Load file content from S3\ncontent: bytes = wmill.load_s3_file(s3object)\n\n# Load file as stream reader\nreader: BufferedReader = wmill.load_s3_file_reader(s3object)\n\n# Write file to S3\nresult: S3Object = wmill.write_s3_file(\n s3object, # Target path (or None to auto-generate)\n file_content, # bytes or BufferedReader\n s3_resource_path, # Optional: specific S3 resource\n content_type, # Optional: MIME type\n content_disposition # Optional: Content-Disposition header\n)\n```\n\n\n# Rust\n\n## Structure\n\nThe script must contain a function called `main` with proper return type:\n\n```rust\nuse anyhow::anyhow;\nuse serde::Serialize;\n\n#[derive(Serialize, Debug)]\nstruct ReturnType {\n result: String,\n count: i32,\n}\n\nfn main(param1: String, param2: i32) -> anyhow::Result<ReturnType> {\n Ok(ReturnType {\n result: param1,\n count: param2,\n })\n}\n```\n\n**Important:**\n- Arguments should be owned types\n- Return type must be serializable (`#[derive(Serialize)]`)\n- Return type is `anyhow::Result<T>`\n\n## Dependencies\n\nPackages must be specified with a partial cargo.toml at the beginning of the script:\n\n```rust\n//! ```cargo\n//! [dependencies]\n//! anyhow = \"1.0.86\"\n//! reqwest = { version = \"0.11\", features = [\"json\"] }\n//! tokio = { version = \"1\", features = [\"full\"] }\n//! ```\n\nuse anyhow::anyhow;\n// ... rest of the code\n```\n\n**Note:** Serde is already included, no need to add it again.\n\n## Async Functions\n\nIf you need to handle async functions (e.g., using tokio), keep the main function sync and create the runtime inside:\n\n```rust\n//! ```cargo\n//! [dependencies]\n//! anyhow = \"1.0.86\"\n//! tokio = { version = \"1\", features = [\"full\"] }\n//! reqwest = { version = \"0.11\", features = [\"json\"] }\n//! 
```\n\nuse anyhow::anyhow;\nuse serde::Serialize;\n\n#[derive(Serialize, Debug)]\nstruct Response {\n data: String,\n}\n\nfn main(url: String) -> anyhow::Result<Response> {\n let rt = tokio::runtime::Runtime::new()?;\n rt.block_on(async {\n let resp = reqwest::get(&url).await?.text().await?;\n Ok(Response { data: resp })\n })\n}\n```\n\n\n# Snowflake\n\nArguments use `?` placeholders.\n\nName the parameters by adding comments before the statement:\n\n```sql\n-- ? name1 (text)\n-- ? name2 (number) = 0\nSELECT * FROM users WHERE name = ? AND age > ?;\n```\n\n\n# TypeScript SDK (windmill-client)\n\nImport: import * as wmill from 'windmill-client'\n\n/**\n * Initialize the Windmill client with authentication token and base URL\n * @param token - Authentication token (defaults to WM_TOKEN env variable)\n * @param baseUrl - API base URL (defaults to BASE_INTERNAL_URL or BASE_URL env variable)\n */\nsetClient(token?: string, baseUrl?: string): void\n\n/**\n * Get the workspace id from env variables\n * @returns workspace id\n */\ngetWorkspace(): string\n\n/**\n * Get a resource value by path\n * @param path path of the resource, defaults to the internal state path\n * @param undefinedIfEmpty if the resource does not exist, return undefined instead of throwing an error\n * @returns resource value\n */\nasync getResource(path?: string, undefinedIfEmpty?: boolean): Promise<any>\n\n/**\n * Get the true root job id\n * @param jobId job id to get the root job id from (defaults to current job)\n * @returns root job id\n */\nasync getRootJobId(jobId?: string): Promise<string>\n\n/**\n * @deprecated Use runScriptByPath or runScriptByHash instead\n */\nasync runScript(path: string | null = null, hash_: string | null = null, args: Record<string, any> | null = null, verbose: boolean = false): Promise<any>\n\n/**\n * Run a script synchronously by its path and wait for the result\n * @param path - Script path in Windmill\n * @param args - Arguments to pass to the script\n * 
@param verbose - Enable verbose logging\n * @returns Script execution result\n */\nasync runScriptByPath(path: string, args: Record<string, any> | null = null, verbose: boolean = false): Promise<any>\n\n/**\n * Run a script synchronously by its hash and wait for the result\n * @param hash_ - Script hash in Windmill\n * @param args - Arguments to pass to the script\n * @param verbose - Enable verbose logging\n * @returns Script execution result\n */\nasync runScriptByHash(hash_: string, args: Record<string, any> | null = null, verbose: boolean = false): Promise<any>\n\n/**\n * Append text to the result stream\n * @param text text to append to the result stream\n */\nappendToResultStream(text: string): void\n\n/**\n * Stream content to the result stream\n * @param stream async iterable whose items are appended to the result stream\n */\nasync streamResult(stream: AsyncIterable<string>): Promise<void>\n\n/**\n * Run a flow synchronously by its path and wait for the result\n * @param path - Flow path in Windmill\n * @param args - Arguments to pass to the flow\n * @param verbose - Enable verbose logging\n * @returns Flow execution result\n */\nasync runFlow(path: string | null = null, args: Record<string, any> | null = null, verbose: boolean = false): Promise<any>\n\n/**\n * Wait for a job to complete and return its result\n * @param jobId - ID of the job to wait for\n * @param verbose - Enable verbose logging\n * @returns Job result when completed\n */\nasync waitJob(jobId: string, verbose: boolean = false): Promise<any>\n\n/**\n * Get the result of a completed job\n * @param jobId - ID of the completed job\n * @returns Job result\n */\nasync getResult(jobId: string): Promise<any>\n\n/**\n * Get the result of a job if completed, or its current status\n * @param jobId - ID of the job\n * @returns Object with started, completed, success, and result properties\n */\nasync getResultMaybe(jobId: string): Promise<any>\n\n/**\n * Wrap a function to execute as a Windmill task within a flow context\n * 
@param f - Function to wrap as a task\n * @returns Async wrapper function that executes as a Windmill job\n */\ntask<P, T>(f: (_: P) => T): (_: P) => Promise<T>\n\n/**\n * @deprecated Use runScriptByPathAsync or runScriptByHashAsync instead\n */\nasync runScriptAsync(path: string | null, hash_: string | null, args: Record<string, any> | null, scheduledInSeconds: number | null = null): Promise<string>\n\n/**\n * Run a script asynchronously by its path\n * @param path - Script path in Windmill\n * @param args - Arguments to pass to the script\n * @param scheduledInSeconds - Schedule execution for a future time (in seconds)\n * @returns Job ID of the created job\n */\nasync runScriptByPathAsync(path: string, args: Record<string, any> | null = null, scheduledInSeconds: number | null = null): Promise<string>\n\n/**\n * Run a script asynchronously by its hash\n * @param hash_ - Script hash in Windmill\n * @param args - Arguments to pass to the script\n * @param scheduledInSeconds - Schedule execution for a future time (in seconds)\n * @returns Job ID of the created job\n */\nasync runScriptByHashAsync(hash_: string, args: Record<string, any> | null = null, scheduledInSeconds: number | null = null): Promise<string>\n\n/**\n * Run a flow asynchronously by its path\n * @param path - Flow path in Windmill\n * @param args - Arguments to pass to the flow\n * @param scheduledInSeconds - Schedule execution for a future time (in seconds)\n * @param doNotTrackInParent - If false, tracks state in parent job (only use when fully awaiting the job)\n * @returns Job ID of the created job\n */\nasync runFlowAsync(path: string | null, args: Record<string, any> | null, scheduledInSeconds: number | null = null, // doNotTrackInParent can only be set to false if the job will be fully awaited and not concurrent with any other job, // as otherwise the child flow and its own children will store their state in the parent job, which will // lead to incorrectness and failures doNotTrackInParent: boolean = true): 
Promise<string>\n\n/**\n * Resolve a resource value in case the default value was picked because the input payload was undefined\n * @param obj resource value or path of the resource under the format `$res:path`\n * @returns resource value\n */\nasync resolveDefaultResource(obj: any): Promise<any>\n\n/**\n * Get the state file path from environment variables\n * @returns State path string\n */\ngetStatePath(): string\n\n/**\n * Set a resource value by path\n * @param path path of the resource to set, defaults to the state path\n * @param value new value of the resource to set\n * @param initializeToTypeIfNotExist if the resource does not exist, initialize it with this type\n */\nasync setResource(value: any, path?: string, initializeToTypeIfNotExist?: string): Promise<void>\n\n/**\n * Set the state\n * @param state state to set\n * @deprecated use setState instead\n */\nasync setInternalState(state: any): Promise<void>\n\n/**\n * Set the state\n * @param state state to set\n */\nasync setState(state: any): Promise<void>\n\n/**\n * Set the progress\n * Progress cannot go backward and is limited to the 0% to 99% range\n * @param percent Progress to set in %\n * @param jobId? Job to set progress for\n */\nasync setProgress(percent: number, jobId?: any): Promise<void>\n\n/**\n * Get the progress\n * @param jobId? 
Job to get progress from\n * @returns Progress value clamped between 0 and 100, or null if not set\n */\nasync getProgress(jobId?: any): Promise<number | null>\n\n/**\n * Set a flow user state\n * @param key key of the state\n * @param value value of the state\n */\nasync setFlowUserState(key: string, value: any, errorIfNotPossible?: boolean): Promise<void>\n\n/**\n * Get a flow user state\n * @param key key of the state\n */\nasync getFlowUserState(key: string, errorIfNotPossible?: boolean): Promise<any>\n\n/**\n * Get the internal state\n * @deprecated use getState instead\n */\nasync getInternalState(): Promise<any>\n\n/**\n * Get the state shared across executions\n */\nasync getState(): Promise<any>\n\n/**\n * Get a variable by path\n * @param path path of the variable\n * @returns variable value\n */\nasync getVariable(path: string): Promise<string>\n\n/**\n * Set a variable by path, creating it if it does not exist\n * @param path path of the variable\n * @param value value of the variable\n * @param isSecretIfNotExist if the variable does not exist, create it as secret or not (default: false)\n * @param descriptionIfNotExist if the variable does not exist, create it with this description (default: \"\")\n */\nasync setVariable(path: string, value: string, isSecretIfNotExist?: boolean, descriptionIfNotExist?: string): Promise<void>\n\n/**\n * Build a PostgreSQL connection URL from a database resource\n * @param path - Path to the database resource\n * @returns PostgreSQL connection URL string\n */\nasync databaseUrlFromResource(path: string): Promise<string>\n\n/**\n * Get S3 client settings from a resource or workspace default\n * @param s3_resource_path - Path to S3 resource (uses workspace default if undefined)\n * @returns S3 client configuration settings\n */\nasync denoS3LightClientSettings(s3_resource_path: string | undefined): Promise<DenoS3LightClientSettings>\n\n/**\n * Load the content of a file stored in S3. 
If the s3ResourcePath is undefined, it will default to the workspace S3 resource.\n * \n * ```typescript\n * let fileContent = await wmill.loadS3File(inputFile)\n * // if the file is a raw text file, it can be decoded and printed directly:\n * const text = new TextDecoder().decode(fileContent)\n * console.log(text);\n * ```\n */\nasync loadS3File(s3object: S3Object, s3ResourcePath: string | undefined = undefined): Promise<Uint8Array | undefined>\n\n/**\n * Load the content of a file stored in S3 as a stream. If the s3ResourcePath is undefined, it will default to the workspace S3 resource.\n * \n * ```typescript\n * let fileContentBlob = await wmill.loadS3FileStream(inputFile)\n * // if the content is plain text, the blob can be read directly:\n * console.log(await fileContentBlob.text());\n * ```\n */\nasync loadS3FileStream(s3object: S3Object, s3ResourcePath: string | undefined = undefined): Promise<Blob | undefined>\n\n/**\n * Persist a file to the S3 bucket. If the s3ResourcePath is undefined, it will default to the workspace S3 resource.\n * \n * ```typescript\n * const s3object = await wmill.writeS3File(s3Object, \"Hello Windmill!\")\n * // read the file back to verify its content:\n * const content = await wmill.loadS3File(s3object)\n * console.log(new TextDecoder().decode(content))\n * ```\n */\nasync writeS3File(s3object: S3Object | undefined, fileContent: string | Blob, s3ResourcePath: string | undefined = undefined, contentType: string | undefined = undefined, contentDisposition: string | undefined = undefined): Promise<S3Object>\n\n/**\n * Sign S3 objects to be used by anonymous users in public apps\n * @param s3objects s3 objects to sign\n * @returns signed s3 objects\n */\nasync signS3Objects(s3objects: S3Object[]): Promise<S3Object[]>\n\n/**\n * Sign S3 object to be used by anonymous users in public apps\n * @param s3object s3 object to sign\n * @returns signed s3 object\n */\nasync signS3Object(s3object: S3Object): Promise<S3Object>\n\n/**\n * Generate a presigned public URL for 
an array of S3 objects.\n * If an S3 object is not signed yet, it will be signed first.\n * @param s3Objects s3 objects to sign\n * @returns list of signed public URLs\n */\nasync getPresignedS3PublicUrls(s3Objects: S3Object[], { baseUrl }: { baseUrl?: string } = {}): Promise<string[]>\n\n/**\n * Generate a presigned public URL for an S3 object. If the S3 object is not signed yet, it will be signed first.\n * @param s3Object s3 object to sign\n * @returns signed public URL\n */\nasync getPresignedS3PublicUrl(s3Object: S3Object, { baseUrl }: { baseUrl?: string } = {}): Promise<string>\n\n/**\n * Get URLs needed for resuming a flow after this step\n * @param approver approver name\n * @returns approval page UI URL, resume and cancel API URLs for resuming the flow\n */\nasync getResumeUrls(approver?: string): Promise<{\n approvalPage: string;\n resume: string;\n cancel: string;\n}>\n\n/**\n * @deprecated use getResumeUrls instead\n */\ngetResumeEndpoints(approver?: string): Promise<{\n approvalPage: string;\n resume: string;\n cancel: string;\n}>\n\n/**\n * Get an OIDC JWT token for auth to external services (e.g. Vault, AWS) (EE only)\n * @param audience audience of the token\n * @param expiresIn Optional number of seconds until the token expires\n * @returns JWT token\n */\nasync getIdToken(audience: string, expiresIn?: number): Promise<string>\n\n/**\n * Convert a base64-encoded string to Uint8Array\n * @param data - Base64-encoded string\n * @returns Decoded Uint8Array\n */\nbase64ToUint8Array(data: string): Uint8Array\n\n/**\n * Convert a Uint8Array to base64-encoded string\n * @param arrayBuffer - Uint8Array to encode\n * @returns Base64-encoded string\n */\nuint8ArrayToBase64(arrayBuffer: Uint8Array): string\n\n/**\n * Get email from workspace username\n * This method is particularly useful for apps that require the email address of the viewer.\n * Indeed, in the viewer context, WM_USERNAME is set to the username of the viewer but WM_EMAIL is set to the email 
of the creator of the app.\n * @param username\n * @returns email address\n */\nasync usernameToEmail(username: string): Promise<string>\n\n/**\n * Sends an interactive approval request via Slack, allowing optional customization of the message, approver, and form fields.\n * \n * **[Enterprise Edition Only]** To include form fields in the Slack approval request, go to **Advanced -> Suspend -> Form**\n * and define a form. Learn more at [Windmill Documentation](https://www.windmill.dev/docs/flows/flow_approval#form).\n * \n * @param {Object} options - The configuration options for the Slack approval request.\n * @param {string} options.slackResourcePath - The path to the Slack resource in Windmill.\n * @param {string} options.channelId - The Slack channel ID where the approval request will be sent.\n * @param {string} [options.message] - Optional custom message to include in the Slack approval request.\n * @param {string} [options.approver] - Optional user ID or name of the approver for the request.\n * @param {DefaultArgs} [options.defaultArgsJson] - Optional object defining or overriding the default arguments to a form field.\n * @param {Enums} [options.dynamicEnumsJson] - Optional object overriding the enum default values of an enum form field.\n * \n * @returns {Promise<void>} Resolves when the Slack approval request is successfully sent.\n * \n * @throws {Error} If the function is not called within a flow or flow preview.\n * @throws {Error} If the `JobService.getSlackApprovalPayload` call fails.\n * \n * **Usage Example:**\n * ```typescript\n * await requestInteractiveSlackApproval({\n * slackResourcePath: \"/u/alex/my_slack_resource\",\n * channelId: \"admins-slack-channel\",\n * message: \"Please approve this request\",\n * approver: \"approver123\",\n * defaultArgsJson: { key1: \"value1\", key2: 42 },\n * dynamicEnumsJson: { foo: [\"choice1\", \"choice2\"], bar: [\"optionA\", \"optionB\"] },\n * });\n * ```\n * \n * **Note:** This function requires 
execution within a Windmill flow or flow preview.\n */\nasync requestInteractiveSlackApproval({ slackResourcePath, channelId, message, approver, defaultArgsJson, dynamicEnumsJson, }: SlackApprovalOptions): Promise<void>\n\n/**\n * Sends an interactive approval request via Teams, allowing optional customization of the message, approver, and form fields.\n * \n * **[Enterprise Edition Only]** To include form fields in the Teams approval request, go to **Advanced -> Suspend -> Form**\n * and define a form. Learn more at [Windmill Documentation](https://www.windmill.dev/docs/flows/flow_approval#form).\n * \n * @param {Object} options - The configuration options for the Teams approval request.\n * @param {string} options.teamName - The Teams team name where the approval request will be sent.\n * @param {string} options.channelName - The Teams channel name where the approval request will be sent.\n * @param {string} [options.message] - Optional custom message to include in the Teams approval request.\n * @param {string} [options.approver] - Optional user ID or name of the approver for the request.\n * @param {DefaultArgs} [options.defaultArgsJson] - Optional object defining or overriding the default arguments to a form field.\n * @param {Enums} [options.dynamicEnumsJson] - Optional object overriding the enum default values of an enum form field.\n * \n * @returns {Promise<void>} Resolves when the Teams approval request is successfully sent.\n * \n * @throws {Error} If the function is not called within a flow or flow preview.\n * @throws {Error} If the `JobService.getTeamsApprovalPayload` call fails.\n * \n * **Usage Example:**\n * ```typescript\n * await requestInteractiveTeamsApproval({\n * teamName: \"admins-teams\",\n * channelName: \"admins-teams-channel\",\n * message: \"Please approve this request\",\n * approver: \"approver123\",\n * defaultArgsJson: { key1: \"value1\", key2: 42 },\n * dynamicEnumsJson: { foo: [\"choice1\", \"choice2\"], bar: [\"optionA\", 
\"optionB\"] },\n * });\n * ```\n * \n * **Note:** This function requires execution within a Windmill flow or flow preview.\n */\nasync requestInteractiveTeamsApproval({ teamName, channelName, message, approver, defaultArgsJson, dynamicEnumsJson, }: TeamsApprovalOptions): Promise<void>\n\n/**\n * Parse an S3 object from URI string or record format\n * @param s3Object - S3 object as URI string (s3://storage/key) or record\n * @returns S3 object record with storage and s3 key\n */\nparseS3Object(s3Object: S3Object): S3ObjectRecord\n\n/**\n * Create a SQL template function for PostgreSQL/datatable queries\n * @param name - Database/datatable name (default: \"main\")\n * @returns SQL template function for building parameterized queries\n * @example\n * let sql = wmill.datatable()\n * let name = 'Robin'\n * let age = 21\n * await sql`\n * SELECT * FROM friends\n * WHERE name = ${name} AND age = ${age}::int\n * `.fetch()\n */\ndatatable(name: string = \"main\"): SqlTemplateFunction\n\n/**\n * Create a SQL template function for DuckDB/ducklake queries\n * @param name - DuckDB database name (default: \"main\")\n * @returns SQL template function for building parameterized queries\n * @example\n * let sql = wmill.ducklake()\n * let name = 'Robin'\n * let age = 21\n * await sql`\n * SELECT * FROM friends\n * WHERE name = ${name} AND age = ${age}\n * `.fetch()\n */\nducklake(name: string = \"main\"): SqlTemplateFunction\n\nasync polarsConnectionSettings(s3_resource_path: string | undefined): Promise<any>\n\nasync duckdbConnectionSettings(s3_resource_path: string | undefined): Promise<any>\n\n\n# Python SDK (wmill)\n\nImport: import wmill\n\ndef get_mocked_api() -> Optional[dict]\n\n# Get the HTTP client instance.\n# \n# Returns:\n# Configured httpx.Client for API requests\ndef get_client() -> httpx.Client\n\n# Make an HTTP GET request to the Windmill API.\n# \n# Args:\n# endpoint: API endpoint path\n# raise_for_status: Whether to raise an exception on HTTP errors\n# **kwargs: 
Additional arguments passed to httpx.get\n# \n# Returns:\n# HTTP response object\ndef get(endpoint, raise_for_status = True, **kwargs) -> httpx.Response\n\n# Make an HTTP POST request to the Windmill API.\n# \n# Args:\n# endpoint: API endpoint path\n# raise_for_status: Whether to raise an exception on HTTP errors\n# **kwargs: Additional arguments passed to httpx.post\n# \n# Returns:\n# HTTP response object\ndef post(endpoint, raise_for_status = True, **kwargs) -> httpx.Response\n\n# Create a new authentication token.\n# \n# Args:\n# duration: Token validity duration (default: 1 day)\n# \n# Returns:\n# New authentication token string\ndef create_token(duration = dt.timedelta(days=1)) -> str\n\n# Create a script job and return its job id.\n# \n# .. deprecated:: Use run_script_by_path_async or run_script_by_hash_async instead.\ndef run_script_async(path: str = None, hash_: str = None, args: dict = None, scheduled_in_secs: int = None) -> str\n\n# Create a script job by path and return its job id.\ndef run_script_by_path_async(path: str, args: dict = None, scheduled_in_secs: int = None) -> str\n\n# Create a script job by hash and return its job id.\ndef run_script_by_hash_async(hash_: str, args: dict = None, scheduled_in_secs: int = None) -> str\n\n# Create a flow job and return its job id.\ndef run_flow_async(path: str, args: dict = None, scheduled_in_secs: int = None, do_not_track_in_parent: bool = True) -> str\n\n# Run script synchronously and return its result.\n# \n# .. 
deprecated:: Use run_script_by_path or run_script_by_hash instead.\ndef run_script(path: str = None, hash_: str = None, args: dict = None, timeout: dt.timedelta | int | float | None = None, verbose: bool = False, cleanup: bool = True, assert_result_is_not_none: bool = False) -> Any\n\n# Run script by path synchronously and return its result.\ndef run_script_by_path(path: str, args: dict = None, timeout: dt.timedelta | int | float | None = None, verbose: bool = False, cleanup: bool = True, assert_result_is_not_none: bool = False) -> Any\n\n# Run script by hash synchronously and return its result.\ndef run_script_by_hash(hash_: str, args: dict = None, timeout: dt.timedelta | int | float | None = None, verbose: bool = False, cleanup: bool = True, assert_result_is_not_none: bool = False) -> Any\n\n# Run a script on the current worker without creating a job\ndef run_inline_script_preview(content: str, language: str, args: dict = None) -> Any\n\n# Wait for a job to complete and return its result.\n# \n# Args:\n# job_id: ID of the job to wait for\n# timeout: Maximum time to wait (seconds or timedelta)\n# verbose: Enable verbose logging\n# cleanup: Register cleanup handler to cancel job on exit\n# assert_result_is_not_none: Raise exception if result is None\n# \n# Returns:\n# Job result when completed\n# \n# Raises:\n# TimeoutError: If timeout is reached\n# Exception: If job fails\ndef wait_job(job_id, timeout: dt.timedelta | int | float | None = None, verbose: bool = False, cleanup: bool = True, assert_result_is_not_none: bool = False)\n\n# Cancel a specific job by ID.\n# \n# Args:\n# job_id: UUID of the job to cancel\n# reason: Optional reason for cancellation\n# \n# Returns:\n# Response message from the cancel endpoint\ndef cancel_job(job_id: str, reason: str = None) -> str\n\n# Cancel currently running executions of the same script.\ndef cancel_running() -> dict\n\n# Get job details by ID.\n# \n# Args:\n# job_id: UUID of the job\n# \n# Returns:\n# Job details 
dictionary\ndef get_job(job_id: str) -> dict\n\n# Get the root job ID for a flow hierarchy.\n# \n# Args:\n# job_id: Job ID (defaults to current WM_JOB_ID)\n# \n# Returns:\n# Root job ID\ndef get_root_job_id(job_id: str | None = None) -> dict\n\n# Get an OIDC JWT token for authentication to external services.\n# \n# Args:\n# audience: Token audience (e.g., \"vault\", \"aws\")\n# expires_in: Optional expiration time in seconds\n# \n# Returns:\n# JWT token string\ndef get_id_token(audience: str, expires_in: int | None = None) -> str\n\n# Get the status of a job.\n# \n# Args:\n# job_id: UUID of the job\n# \n# Returns:\n# Job status: \"RUNNING\", \"WAITING\", or \"COMPLETED\"\ndef get_job_status(job_id: str) -> JobStatus\n\n# Get the result of a completed job.\n# \n# Args:\n# job_id: UUID of the completed job\n# assert_result_is_not_none: Raise exception if result is None\n# \n# Returns:\n# Job result\ndef get_result(job_id: str, assert_result_is_not_none: bool = True) -> Any\n\n# Get a variable value by path.\n# \n# Args:\n# path: Variable path in Windmill\n# \n# Returns:\n# Variable value as string\ndef get_variable(path: str) -> str\n\n# Set a variable value by path, creating it if it doesn't exist.\n# \n# Args:\n# path: Variable path in Windmill\n# value: Variable value to set\n# is_secret: Whether the variable should be secret (default: False)\ndef set_variable(path: str, value: str, is_secret: bool = False) -> None\n\n# Get a resource value by path.\n# \n# Args:\n# path: Resource path in Windmill\n# none_if_undefined: Return None instead of raising if not found\n# \n# Returns:\n# Resource value dictionary or None\ndef get_resource(path: str, none_if_undefined: bool = False) -> dict | None\n\n# Set a resource value by path, creating it if it doesn't exist.\n# \n# Args:\n# value: Resource value to set\n# path: Resource path in Windmill\n# resource_type: Resource type for creation\ndef set_resource(value: Any, path: str, resource_type: str)\n\n# List resources from 
Windmill workspace.\n# \n# Args:\n# resource_type: Optional resource type to filter by (e.g., \"postgresql\", \"mysql\", \"s3\")\n# page: Optional page number for pagination\n# per_page: Optional number of results per page\n# \n# Returns:\n# List of resource dictionaries\ndef list_resources(resource_type: str = None, page: int = None, per_page: int = None) -> list[dict]\n\n# Set the workflow state.\n# \n# Args:\n# value: State value to set\ndef set_state(value: Any)\n\n# Set job progress percentage (0-99).\n# \n# Args:\n# value: Progress percentage\n# job_id: Job ID (defaults to current WM_JOB_ID)\ndef set_progress(value: int, job_id: Optional[str] = None)\n\n# Get job progress percentage.\n# \n# Args:\n# job_id: Job ID (defaults to current WM_JOB_ID)\n# \n# Returns:\n# Progress value (0-100) or None if not set\ndef get_progress(job_id: Optional[str] = None) -> Any\n\n# Set the user state of a flow at a given key\ndef set_flow_user_state(key: str, value: Any) -> None\n\n# Get the user state of a flow at a given key\ndef get_flow_user_state(key: str) -> Any\n\n# Get the Windmill server version.\n# \n# Returns:\n# Version string\ndef version()\n\n# Convenience helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection from DuckDB\ndef get_duckdb_connection_settings(s3_resource_path: str = '') -> DuckDbConnectionSettings | None\n\n# Convenience helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection from Polars\ndef get_polars_connection_settings(s3_resource_path: str = '') -> PolarsConnectionSettings\n\n# Convenience helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection using boto3\ndef get_boto3_connection_settings(s3_resource_path: str = '') -> Boto3ConnectionSettings\n\n# Load a file from the workspace S3 bucket and return its content as bytes.\n# \n# '''python\n# from wmill import S3Object\n# \n# s3_obj = 
S3Object(s3=\"/path/to/my_file.txt\")\n# my_obj_content = client.load_s3_file(s3_obj)\n# file_content = my_obj_content.decode(\"utf-8\")\n# '''\ndef load_s3_file(s3object: S3Object | str, s3_resource_path: str | None) -> bytes\n\n# Load a file from the workspace S3 bucket and return its content as a bytes stream.\n# \n# '''python\n# from wmill import S3Object\n# \n# s3_obj = S3Object(s3=\"/path/to/my_file.txt\")\n# with wmill.load_s3_file_reader(s3_obj, s3_resource_path) as file_reader:\n# print(file_reader.read())\n# '''\ndef load_s3_file_reader(s3object: S3Object | str, s3_resource_path: str | None) -> BufferedReader\n\n# Write a file to the workspace S3 bucket\n# \n# '''python\n# from wmill import S3Object\n# \n# s3_obj = S3Object(s3=\"/path/to/my_file.txt\")\n# \n# # for an in-memory bytes array:\n# file_content = b'Hello Windmill!'\n# client.write_s3_file(s3_obj, file_content)\n# \n# # for a file:\n# with open(\"my_file.txt\", \"rb\") as my_file:\n# client.write_s3_file(s3_obj, my_file)\n# '''\ndef write_s3_file(s3object: S3Object | str | None, file_content: BufferedReader | bytes, s3_resource_path: str | None, content_type: str | None = None, content_disposition: str | None = None) -> S3Object\n\n# Sign S3 objects for use by anonymous users in public apps.\n# \n# Args:\n# s3_objects: List of S3 objects to sign\n# \n# Returns:\n# List of signed S3 objects\ndef sign_s3_objects(s3_objects: list[S3Object | str]) -> list[S3Object]\n\n# Sign a single S3 object for use by anonymous users in public apps.\n# \n# Args:\n# s3_object: S3 object to sign\n# \n# Returns:\n# Signed S3 object\ndef sign_s3_object(s3_object: S3Object | str) -> S3Object\n\n# Generate presigned public URLs for an array of S3 objects.\n# If an S3 object is not signed yet, it will be signed first.\n# \n# Args:\n# s3_objects: List of S3 objects to sign\n# base_url: Optional base URL for the presigned URLs (defaults to WM_BASE_URL)\n# \n# Returns:\n# List of signed public URLs\n# \n# Example:\n# >>> s3_objs = 
[S3Object(s3=\"/path/to/file1.txt\"), S3Object(s3=\"/path/to/file2.txt\")]\n# >>> urls = client.get_presigned_s3_public_urls(s3_objs)\ndef get_presigned_s3_public_urls(s3_objects: list[S3Object | str], base_url: str | None = None) -> list[str]\n\n# Generate a presigned public URL for an S3 object.\n# If the S3 object is not signed yet, it will be signed first.\n# \n# Args:\n# s3_object: S3 object to sign\n# base_url: Optional base URL for the presigned URL (defaults to WM_BASE_URL)\n# \n# Returns:\n# Signed public URL\n# \n# Example:\n# >>> s3_obj = S3Object(s3=\"/path/to/file.txt\")\n# >>> url = client.get_presigned_s3_public_url(s3_obj)\ndef get_presigned_s3_public_url(s3_object: S3Object | str, base_url: str | None = None) -> str\n\n# Get the current user information.\n# \n# Returns:\n# User details dictionary\ndef whoami() -> dict\n\n# Get the current user information (alias for whoami).\n# \n# Returns:\n# User details dictionary\ndef user() -> dict\n\n# Get the state resource path from environment.\n# \n# Returns:\n# State path string\ndef state_path() -> str\n\n# Get the workflow state.\n# \n# Returns:\n# State value or None if not set\ndef state() -> Any\n\n# Set the state in the shared folder using pickle\ndef set_shared_state_pickle(value: Any, path: str = 'state.pickle') -> None\n\n# Get the state in the shared folder using pickle\ndef get_shared_state_pickle(path: str = 'state.pickle') -> Any\n\n# Set the state in the shared folder using JSON\ndef set_shared_state(value: Any, path: str = 'state.json') -> None\n\n# Get the state in the shared folder using JSON\ndef get_shared_state(path: str = 'state.json') -> None\n\n# Get URLs needed for resuming a flow after suspension.\n# \n# Args:\n# approver: Optional approver name\n# \n# Returns:\n# Dictionary with approvalPage, resume, and cancel URLs\ndef get_resume_urls(approver: str = None) -> dict\n\n# Sends an interactive approval request via Slack, allowing optional customization of the message, 
approver, and form fields.\n# \n# **[Enterprise Edition Only]** To include form fields in the Slack approval request, use the \"Advanced -> Suspend -> Form\" functionality.\n# Learn more at: https://www.windmill.dev/docs/flows/flow_approval#form\n# \n# :param slack_resource_path: The path to the Slack resource in Windmill.\n# :type slack_resource_path: str\n# :param channel_id: The Slack channel ID where the approval request will be sent.\n# :type channel_id: str\n# :param message: Optional custom message to include in the Slack approval request.\n# :type message: str, optional\n# :param approver: Optional user ID or name of the approver for the request.\n# :type approver: str, optional\n# :param default_args_json: Optional dictionary defining or overriding the default arguments for form fields.\n# :type default_args_json: dict, optional\n# :param dynamic_enums_json: Optional dictionary overriding the enum default values of enum form fields.\n# :type dynamic_enums_json: dict, optional\n# \n# :raises Exception: If the function is not called within a flow or flow preview.\n# :raises Exception: If the required flow job or flow step environment variables are not set.\n# \n# :return: None\n# \n# **Usage Example:**\n# >>> client.request_interactive_slack_approval(\n# ... slack_resource_path=\"/u/alex/my_slack_resource\",\n# ... channel_id=\"admins-slack-channel\",\n# ... message=\"Please approve this request\",\n# ... approver=\"approver123\",\n# ... default_args_json={\"key1\": \"value1\", \"key2\": 42},\n# ... dynamic_enums_json={\"foo\": [\"choice1\", \"choice2\"], \"bar\": [\"optionA\", \"optionB\"]},\n# ... 
)\n# \n# **Notes:**\n# - This function must be executed within a Windmill flow or flow preview.\n# - The function checks for required environment variables (`WM_FLOW_JOB_ID`, `WM_FLOW_STEP_ID`) to ensure it is run in the appropriate context.\ndef request_interactive_slack_approval(slack_resource_path: str, channel_id: str, message: str = None, approver: str = None, default_args_json: dict = None, dynamic_enums_json: dict = None) -> None\n\n# Get email from workspace username\n# This method is particularly useful for apps that require the email address of the viewer.\n# Indeed, in the viewer context WM_USERNAME is set to the username of the viewer but WM_EMAIL is set to the email of the creator of the app.\ndef username_to_email(username: str) -> str\n\n# Send a message to a Microsoft Teams conversation with conversation_id, where success is used to style the message\ndef send_teams_message(conversation_id: str, text: str, success: bool = True, card_block: dict = None)\n\n# Get a DataTable client for SQL queries.\n# \n# Args:\n# name: Database name (default: \"main\")\n# \n# Returns:\n# DataTableClient instance\ndef datatable(name: str = 'main')\n\n# Get a DuckLake client for DuckDB queries.\n# \n# Args:\n# name: Database name (default: \"main\")\n# \n# Returns:\n# DucklakeClient instance\ndef ducklake(name: str = 'main')\n\ndef init_global_client(f)\n\ndef deprecate(in_favor_of: str)\n\n# Get the current workspace ID.\n# \n# Returns:\n# Workspace ID string\ndef get_workspace() -> str\n\ndef get_version() -> str\n\n# Run a script synchronously by hash and return its result.\n# \n# Args:\n# hash: Script hash\n# args: Script arguments\n# verbose: Enable verbose logging\n# assert_result_is_not_none: Raise exception if result is None\n# cleanup: Register cleanup handler to cancel job on exit\n# timeout: Maximum time to wait\n# \n# Returns:\n# Script result\ndef run_script_sync(hash: str, args: Dict[str, Any] = None, verbose: bool = False, assert_result_is_not_none: bool 
= True, cleanup: bool = True, timeout: dt.timedelta = None) -> Any\n\n# Run a script synchronously by path and return its result.\n# \n# Args:\n# path: Script path\n# args: Script arguments\n# verbose: Enable verbose logging\n# assert_result_is_not_none: Raise exception if result is None\n# cleanup: Register cleanup handler to cancel job on exit\n# timeout: Maximum time to wait\n# \n# Returns:\n# Script result\ndef run_script_by_path_sync(path: str, args: Dict[str, Any] = None, verbose: bool = False, assert_result_is_not_none: bool = True, cleanup: bool = True, timeout: dt.timedelta = None) -> Any\n\n# Convenience helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection from DuckDB\ndef duckdb_connection_settings(s3_resource_path: str = '') -> DuckDbConnectionSettings\n\n# Convenience helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection from Polars\ndef polars_connection_settings(s3_resource_path: str = '') -> PolarsConnectionSettings\n\n# Convenience helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection using boto3\ndef boto3_connection_settings(s3_resource_path: str = '') -> Boto3ConnectionSettings\n\n# Get the state\ndef get_state() -> Any\n\n# Get the state resource path from environment.\n# \n# Returns:\n# State path string\ndef get_state_path() -> str\n\n# Decorator to mark a function as a workflow task.\n# \n# When executed inside a Windmill job, the decorated function runs as a\n# separate workflow step. 
Outside Windmill, it executes normally.\n# \n# Args:\n# tag: Optional worker tag for execution\n# \n# Returns:\n# Decorated function\ndef task(*args, **kwargs)\n\n# Parse resource syntax from string.\ndef parse_resource_syntax(s: str) -> Optional[str]\n\n# Parse S3 object from string or S3Object format.\ndef parse_s3_object(s3_object: S3Object | str) -> S3Object\n\n# Parse variable syntax from string.\ndef parse_variable_syntax(s: str) -> Optional[str]\n\n# Append text to the result stream.\n# \n# Args:\n# text: text to append to the result stream\ndef append_to_result_stream(text: str) -> None\n\n# Forward a stream to the result stream.\n# \n# Args:\n# stream: stream to forward to the result stream\ndef stream_result(stream) -> None\n\n# Execute a SQL query against the DataTable.\n# \n# Args:\n# sql: SQL query string with $1, $2, etc. placeholders\n# *args: Positional arguments to bind to query placeholders\n# \n# Returns:\n# SqlQuery instance for fetching results\ndef query(sql: str, *args)\n\n# Execute query and fetch results.\n# \n# Args:\n# result_collection: Optional result collection mode\n# \n# Returns:\n# Query results\ndef fetch(result_collection: str | None = None)\n\n# Execute query and fetch first row of results.\n# \n# Returns:\n# First row of query results\ndef fetch_one()\n\n# DuckDB executor requires explicit argument types at declaration\n# These types exist in both DuckDB and Postgres\n# Check that the types exist if you plan to extend this function for other SQL engines.\ndef infer_sql_type(value) -> str\n\n\n";
1
+ export declare const SCRIPT_GUIDANCE = "\nEach script should be placed in a folder. Ask the user which folder they want the script to be located in before you start coding.\nAfter writing a script, you do not need to create .lock and .yaml files manually. Instead, you can run the `wmill script generate-metadata` bash command. This command takes no arguments. After writing the script, you can ask the user if they want to push the script with `wmill sync push`. Both should be run at the root of the repository.\n\nYou can use `wmill resource-type list --schema` to list all resource types available. You should use that to know the type of the resource you need to use in your script. You can use grep if the output is too long.\n\n# Windmill Script Writing Guide\n\n## General Principles\n\n- Scripts must export a main function (do not call it)\n- Libraries are installed automatically - do not show installation instructions\n- Credentials and configuration are stored in resources and passed as parameters\n- The windmill client (`wmill`) provides APIs for interacting with the platform\n\n## Function Naming\n\n- Main function: `main` (or `preprocessor` for preprocessor scripts)\n- Must be async for TypeScript variants\n\n## Return Values\n\n- Scripts can return any JSON-serializable value\n- Return values become available to subsequent flow steps via `results.step_id`\n\n## Preprocessor Scripts\n\nPreprocessor scripts process raw trigger data from various sources (webhook, custom HTTP route, SQS, WebSocket, Kafka, NATS, MQTT, Postgres, or email) before passing it to the flow. 
This separates the trigger logic from the flow logic and keeps the auto-generated UI clean.\n\nThe returned object determines the parameter values passed to the flow.\ne.g., `{ b: 1, a: 2 }` calls the flow with `a = 2` and `b = 1`, assuming the flow has two inputs called `a` and `b`.\n\nThe preprocessor receives a single parameter called `event`.\n\n\n# Bash\n\n## Structure\n\nDo not include `#!/bin/bash`. Arguments are obtained as positional parameters:\n\n```bash\n# Get arguments\nvar1=\"$1\"\nvar2=\"$2\"\n\necho \"Processing $var1 and $var2\"\n\n# Return JSON by echoing to stdout\necho \"{\\\"result\\\": \\\"$var1\\\", \\\"count\\\": $var2}\"\n```\n\n**Important:**\n- Do not include shebang (`#!/bin/bash`)\n- Arguments are always strings\n- Access with `$1`, `$2`, etc.\n\n## Output\n\nThe script output is captured as the result. For structured data, output valid JSON:\n\n```bash\nname=\"$1\"\ncount=\"$2\"\n\n# Output JSON result\ncat << EOF\n{\n \"name\": \"$name\",\n \"count\": $count,\n \"timestamp\": \"$(date -Iseconds)\"\n}\nEOF\n```\n\n## Environment Variables\n\nEnvironment variables set in Windmill are available:\n\n```bash\n# Access environment variable\necho \"Workspace: $WM_WORKSPACE\"\necho \"Job ID: $WM_JOB_ID\"\n```\n\n\n# BigQuery\n\nArguments use `@name` syntax.\n\nName the parameters by adding comments before the statement:\n\n```sql\n-- @name1 (string)\n-- @name2 (int64) = 0\nSELECT * FROM users WHERE name = @name1 AND age > @name2;\n```\n\n\n# TypeScript (Bun)\n\nBun runtime with full npm ecosystem and fastest execution.\n\n## Structure\n\nExport a single **async** function called `main`:\n\n```typescript\nexport async function main(param1: string, param2: number) {\n // Your code here\n return { result: param1, count: param2 };\n}\n```\n\nDo not call the main function. 
Libraries are installed automatically.\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nUse the `RT` namespace for resource types:\n\n```typescript\nexport async function main(stripe: RT.Stripe) {\n // stripe contains API key and config from the resource\n}\n```\n\nOnly use resource types if you need them to satisfy the instructions. Always use the RT namespace.\n\n## Imports\n\n```typescript\nimport Stripe from \"stripe\";\nimport { someFunction } from \"some-package\";\n```\n\n## Windmill Client\n\nImport the windmill client for platform interactions:\n\n```typescript\nimport * as wmill from \"windmill-client\";\n```\n\nSee the SDK documentation for available methods.\n\n## Preprocessor Scripts\n\nFor preprocessor scripts, the function should be named `preprocessor` and receives an `event` parameter:\n\n```typescript\ntype Event = {\n kind:\n | \"webhook\"\n | \"http\"\n | \"websocket\"\n | \"kafka\"\n | \"email\"\n | \"nats\"\n | \"postgres\"\n | \"sqs\"\n | \"mqtt\"\n | \"gcp\";\n body: any;\n headers: Record<string, string>;\n query: Record<string, string>;\n};\n\nexport async function preprocessor(event: Event) {\n return {\n param1: event.body.field1,\n param2: event.query.id,\n };\n}\n```\n\n## S3 Object Operations\n\nWindmill provides built-in support for S3-compatible storage operations.\n\n### S3Object Type\n\nThe S3Object type represents a file in S3 storage:\n\n```typescript\ntype S3Object = {\n s3: string; // Path within the bucket\n};\n```\n\n## TypeScript Operations\n\n```typescript\nimport * as wmill from \"windmill-client\";\n\n// Load file content from S3\nconst content: Uint8Array = await wmill.loadS3File(s3object);\n\n// Load file as stream\nconst blob: Blob = await wmill.loadS3FileStream(s3object);\n\n// Write file to S3\nconst result: S3Object = await wmill.writeS3File(\n s3object, // Target path (or undefined to auto-generate)\n fileContent, // string or Blob\n 
s3ResourcePath // Optional: specific S3 resource to use\n);\n```\n\n\n# TypeScript (Bun Native)\n\nNative TypeScript execution with fetch only - no external imports allowed.\n\n## Structure\n\nExport a single **async** function called `main`:\n\n```typescript\nexport async function main(param1: string, param2: number) {\n // Your code here\n return { result: param1, count: param2 };\n}\n```\n\nDo not call the main function.\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nUse the `RT` namespace for resource types:\n\n```typescript\nexport async function main(stripe: RT.Stripe) {\n // stripe contains API key and config from the resource\n}\n```\n\nOnly use resource types if you need them to satisfy the instructions. Always use the RT namespace.\n\n## Imports\n\n**No imports allowed.** Use the globally available `fetch` function:\n\n```typescript\nexport async function main(url: string) {\n const response = await fetch(url);\n return await response.json();\n}\n```\n\n## Windmill Client\n\nThe windmill client is not available in native TypeScript mode. 
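For example, a minimal sketch of reading a Windmill variable over the REST API with plain `fetch` (the `/api/w/{workspace}/variables/get_value/{path}` endpoint and the `BASE_INTERNAL_URL`/`WM_WORKSPACE`/`WM_TOKEN` environment variables are assumptions about the standard job environment, not part of this guidance):

```typescript
// Hypothetical sketch only: the endpoint path and env variable names below
// are assumptions, not part of the official guidance.
function windmillUrl(path: string): string {
  const base = process.env["BASE_INTERNAL_URL"] ?? "http://localhost:8000";
  const workspace = process.env["WM_WORKSPACE"] ?? "demo";
  // Variable values are assumed to be served under /api/w/{workspace}/...
  return `${base}/api/w/${workspace}/variables/get_value/${path}`;
}

export async function main(variablePath: string) {
  // WM_TOKEN is injected into the job and used as a Bearer token
  const response = await fetch(windmillUrl(variablePath), {
    headers: { Authorization: `Bearer ${process.env["WM_TOKEN"] ?? ""}` },
  });
  if (!response.ok) {
    throw new Error(`Windmill API returned HTTP ${response.status}`);
  }
  return await response.json();
}
```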
Use fetch to call APIs directly.\n\n## Preprocessor Scripts\n\nFor preprocessor scripts, the function should be named `preprocessor` and receives an `event` parameter:\n\n```typescript\ntype Event = {\n kind:\n | \"webhook\"\n | \"http\"\n | \"websocket\"\n | \"kafka\"\n | \"email\"\n | \"nats\"\n | \"postgres\"\n | \"sqs\"\n | \"mqtt\"\n | \"gcp\";\n body: any;\n headers: Record<string, string>;\n query: Record<string, string>;\n};\n\nexport async function preprocessor(event: Event) {\n return {\n param1: event.body.field1,\n param2: event.query.id,\n };\n}\n```\n\n\n# C#\n\nThe script must contain a public static `Main` method inside a class:\n\n```csharp\npublic class Script\n{\n public static object Main(string name, int count)\n {\n return new { Name = name, Count = count };\n }\n}\n```\n\n**Important:**\n- Class name is irrelevant\n- Method must be `public static`\n- Return type can be `object` or a specific type\n\n## NuGet Packages\n\nAdd packages using the `#r` directive at the top:\n\n```csharp\n#r \"nuget: Newtonsoft.Json, 13.0.3\"\n#r \"nuget: RestSharp, 110.2.0\"\n\nusing Newtonsoft.Json;\nusing RestSharp;\n\npublic class Script\n{\n public static object Main(string url)\n {\n var client = new 
RestClient(url);\n var request = new RestRequest();\n var response = client.Get(request);\n return JsonConvert.DeserializeObject(response.Content);\n }\n}\n```\n\n\n# TypeScript (Deno)\n\nDeno runtime with npm support via `npm:` prefix and native Deno libraries.\n\n## Structure\n\nExport a single **async** function called `main`:\n\n```typescript\nexport async function main(param1: string, param2: number) {\n // Your code here\n return { result: param1, count: param2 };\n}\n```\n\nDo not call the main function. Libraries are installed automatically.\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nUse the `RT` namespace for resource types:\n\n```typescript\nexport async function main(stripe: RT.Stripe) {\n // stripe contains API key and config from the resource\n}\n```\n\nOnly use resource types if you need them to satisfy the instructions. Always use the RT namespace.\n\n## Imports\n\n```typescript\n// npm packages use npm: prefix\nimport Stripe from \"npm:stripe\";\nimport { someFunction } from \"npm:some-package\";\n\n// Deno standard library\nimport { serve } from \"https://deno.land/std/http/server.ts\";\n```\n\n## Windmill Client\n\nImport the windmill client for platform interactions:\n\n```typescript\nimport * as wmill from \"windmill-client\";\n```\n\nSee the SDK documentation for available methods.\n\n## Preprocessor Scripts\n\nFor preprocessor scripts, the function should be named `preprocessor` and receives an `event` parameter:\n\n```typescript\ntype Event = {\n kind:\n | \"webhook\"\n | \"http\"\n | \"websocket\"\n | \"kafka\"\n | \"email\"\n | \"nats\"\n | \"postgres\"\n | \"sqs\"\n | \"mqtt\"\n | \"gcp\";\n body: any;\n headers: Record<string, string>;\n query: Record<string, string>;\n};\n\nexport async function preprocessor(event: Event) {\n return {\n param1: event.body.field1,\n param2: event.query.id,\n };\n}\n```\n\n## S3 Object Operations\n\nWindmill provides 
built-in support for S3-compatible storage operations.\n\n### S3Object Type\n\nThe S3Object type represents a file in S3 storage:\n\n```typescript\ntype S3Object = {\n s3: string; // Path within the bucket\n};\n```\n\n## TypeScript Operations\n\n```typescript\nimport * as wmill from \"windmill-client\";\n\n// Load file content from S3\nconst content: Uint8Array = await wmill.loadS3File(s3object);\n\n// Load file as stream\nconst blob: Blob = await wmill.loadS3FileStream(s3object);\n\n// Write file to S3\nconst result: S3Object = await wmill.writeS3File(\n s3object, // Target path (or undefined to auto-generate)\n fileContent, // string or Blob\n s3ResourcePath // Optional: specific S3 resource to use\n);\n```\n\n\n# DuckDB\n\nArguments are defined with comments and used with `$name` syntax:\n\n```sql\n-- $name (text) = default\n-- $age (integer)\nSELECT * FROM users WHERE name = $name AND age > $age;\n```\n\n## Ducklake Integration\n\nAttach Ducklake for data lake operations:\n\n```sql\n-- Main ducklake\nATTACH 'ducklake' AS dl;\n\n-- Named ducklake\nATTACH 'ducklake://my_lake' AS dl;\n\n-- Then query\nSELECT * FROM dl.schema.table;\n```\n\n## External Database Connections\n\nConnect to external databases using resources:\n\n```sql\nATTACH '$res:path/to/resource' AS db (TYPE postgres);\nSELECT * FROM db.schema.table;\n```\n\n## S3 File Operations\n\nRead files from S3 storage:\n\n```sql\n-- Default storage\nSELECT * FROM read_csv('s3:///path/to/file.csv');\n\n-- Named storage\nSELECT * FROM read_csv('s3://storage_name/path/to/file.csv');\n\n-- Parquet files\nSELECT * FROM read_parquet('s3:///path/to/file.parquet');\n\n-- JSON files\nSELECT * FROM read_json('s3:///path/to/file.json');\n```\n\n\n# Go\n\n## Structure\n\nThe file package must be `inner` and export a function called `main`:\n\n```go\npackage inner\n\nfunc main(param1 string, param2 int) (map[string]interface{}, error) {\n return map[string]interface{}{\n \"result\": param1,\n \"count\": param2,\n }, 
nil\n}\n```\n\n**Important:**\n- Package must be `inner`\n- Return type must be `({return_type}, error)`\n- Function name is `main` (lowercase)\n\n## Return Types\n\nThe return type can be any Go type that can be serialized to JSON:\n\n```go\npackage inner\n\ntype Result struct {\n Name string `json:\"name\"`\n Count int `json:\"count\"`\n}\n\nfunc main(name string, count int) (Result, error) {\n return Result{\n Name: name,\n Count: count,\n }, nil\n}\n```\n\n## Error Handling\n\nReturn errors as the second return value:\n\n```go\npackage inner\n\nimport \"errors\"\n\nfunc main(value int) (string, error) {\n if value < 0 {\n return \"\", errors.New(\"value must be positive\")\n }\n return \"success\", nil\n}\n```\n\n\n# GraphQL\n\n## Structure\n\nWrite GraphQL queries or mutations. Arguments can be added as query parameters:\n\n```graphql\nquery GetUser($id: ID!) {\n user(id: $id) {\n id\n name\n email\n }\n}\n```\n\n## Variables\n\nVariables are passed as script arguments and automatically bound to the query:\n\n```graphql\nquery SearchProducts($query: String!, $limit: Int = 10) {\n products(search: $query, first: $limit) {\n edges {\n node {\n id\n name\n price\n }\n }\n }\n}\n```\n\n## Mutations\n\n```graphql\nmutation CreateUser($input: CreateUserInput!) 
{\n createUser(input: $input) {\n id\n name\n createdAt\n }\n}\n```\n\n\n# Java\n\nThe script must contain a public class named `Main` with a `public static Object main(...)` method:\n\n```java\npublic class Main {\n public static Object main(String name, int count) {\n java.util.Map<String, Object> result = new java.util.HashMap<>();\n result.put(\"name\", name);\n result.put(\"count\", count);\n return result;\n }\n}\n```\n\n**Important:**\n- Class must be named `Main`\n- Method must be `public static Object main(...)`\n- Return type is `Object` or `void`\n\n## Maven Dependencies\n\nAdd dependencies using comments at the top:\n\n```java\n//requirements:\n//com.google.code.gson:gson:2.10.1\n//org.apache.httpcomponents:httpclient:4.5.14\n\nimport com.google.gson.Gson;\n\npublic class Main {\n public static Object main(String input) {\n Gson gson = new Gson();\n return gson.fromJson(input, Object.class);\n }\n}\n```\n\n\n# Microsoft SQL Server (MSSQL)\n\nArguments use `@P1`, `@P2`, etc.\n\nName the parameters by adding comments before the statement:\n\n```sql\n-- @P1 name1 (varchar)\n-- @P2 name2 (int) = 0\nSELECT * FROM users WHERE name = @P1 AND age > @P2;\n```\n\n\n# MySQL\n\nArguments use `?` placeholders.\n\nName the parameters by adding comments before the statement:\n\n```sql\n-- ? name1 (text)\n-- ? name2 (int) = 0\nSELECT * FROM users WHERE name = ? 
AND age > ?;\n```\n\n\n# TypeScript (Native)\n\nNative TypeScript execution with fetch only - no external imports allowed.\n\n## Structure\n\nExport a single **async** function called `main`:\n\n```typescript\nexport async function main(param1: string, param2: number) {\n // Your code here\n return { result: param1, count: param2 };\n}\n```\n\nDo not call the main function.\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nUse the `RT` namespace for resource types:\n\n```typescript\nexport async function main(stripe: RT.Stripe) {\n // stripe contains API key and config from the resource\n}\n```\n\nOnly use resource types if you need them to satisfy the instructions. Always use the RT namespace.\n\n## Imports\n\n**No imports allowed.** Use the globally available `fetch` function:\n\n```typescript\nexport async function main(url: string) {\n const response = await fetch(url);\n return await response.json();\n}\n```\n\n## Windmill Client\n\nThe windmill client is not available in native TypeScript mode. 
Use fetch to call APIs directly.\n\n## Preprocessor Scripts\n\nFor preprocessor scripts, the function should be named `preprocessor` and receives an `event` parameter:\n\n```typescript\ntype Event = {\n kind:\n | \"webhook\"\n | \"http\"\n | \"websocket\"\n | \"kafka\"\n | \"email\"\n | \"nats\"\n | \"postgres\"\n | \"sqs\"\n | \"mqtt\"\n | \"gcp\";\n body: any;\n headers: Record<string, string>;\n query: Record<string, string>;\n};\n\nexport async function preprocessor(event: Event) {\n return {\n param1: event.body.field1,\n param2: event.query.id\n };\n}\n```\n\n\n# PHP\n\n## Structure\n\nThe script must start with `<?php` and contain at least one function called `main`:\n\n```php\n<?php\n\nfunction main(string $param1, int $param2) {\n return [\"result\" => $param1, \"count\" => $param2];\n}\n```\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nYou need to **redefine** the type of the resources that are needed before the main function. Always check if the class already exists using `class_exists`:\n\n```php\n<?php\n\nif (!class_exists('Postgresql')) {\n class Postgresql {\n public string $host;\n public int $port;\n public string $user;\n public string $password;\n public string $dbname;\n }\n}\n\nfunction main(Postgresql $db) {\n // $db contains the database connection details\n}\n```\n\nThe resource type name has to be exactly as specified.\n\n## Library Dependencies\n\nSpecify library dependencies as comments before the main function:\n\n```php\n<?php\n\n// require:\n// guzzlehttp/guzzle\n// stripe/stripe-php@^10.0\n\nfunction main() {\n // Libraries are available\n}\n```\n\nOne dependency per line. 
No need to require autoload, it is already done.\n\n\n# PostgreSQL\n\nArguments are obtained directly in the statement with `$1::{type}`, `$2::{type}`, etc.\n\nName the parameters by adding comments at the beginning of the script (without specifying the type):\n\n```sql\n-- $1 name1\n-- $2 name2 = default_value\nSELECT * FROM users WHERE name = $1::TEXT AND age > $2::INT;\n```\n\n\n# PowerShell\n\n## Structure\n\nArguments are obtained by declaring a `param` block on the first line:\n\n```powershell\nparam($Name, $Count = 0, [int]$Age)\n\n# Your code here\nWrite-Output \"Processing $Name, count: $Count, age: $Age\"\n\n# Return object\n@{\n name = $Name\n count = $Count\n age = $Age\n}\n```\n\n## Parameter Types\n\nYou can specify types for parameters:\n\n```powershell\nparam(\n [string]$Name,\n [int]$Count = 0,\n [bool]$Enabled = $true,\n [array]$Items\n)\n\n@{\n name = $Name\n count = $Count\n enabled = $Enabled\n items = $Items\n}\n```\n\n## Return Values\n\nReturn values by outputting them at the end of the script:\n\n```powershell\n# Avoid $Input as a parameter name: it is a reserved automatic variable\nparam($InputData)\n\n$result = @{\n processed = $true\n data = $InputData\n timestamp = Get-Date -Format \"o\"\n}\n\n$result\n```\n\n\n# Python\n\n## Structure\n\nThe script must contain at least one function called `main`:\n\n```python\ndef main(param1: str, param2: int):\n # Your code here\n return {\"result\": param1, \"count\": param2}\n```\n\nDo not call the main function. 
Libraries are installed automatically.\n\n## Resource Types\n\nOn Windmill, credentials and configuration are stored in resources and passed as parameters to main.\n\nYou need to **redefine** the type of the resources that are needed before the main function as TypedDict:\n\n```python\nfrom typing import TypedDict\n\nclass postgresql(TypedDict):\n host: str\n port: int\n user: str\n password: str\n dbname: str\n\ndef main(db: postgresql):\n # db contains the database connection details\n pass\n```\n\n**Important rules:**\n\n- The resource type name must be **IN LOWERCASE**\n- Only include resource types if they are actually needed\n- If an import conflicts with a resource type name, **rename the imported object, not the type name**\n- Make sure to import TypedDict from typing **if you're using it**\n\n## Imports\n\nLibraries are installed automatically. Do not show installation instructions.\n\n```python\nimport requests\nimport pandas as pd\nfrom datetime import datetime\n```\n\nIf an import name conflicts with a resource type:\n\n```python\n# Wrong - don't rename the type\nimport stripe as stripe_lib\nclass stripe_type(TypedDict): ...\n\n# Correct - rename the import\nimport stripe as stripe_sdk\nclass stripe(TypedDict):\n api_key: str\n```\n\n## Windmill Client\n\nImport the windmill client for platform interactions:\n\n```python\nimport wmill\n```\n\nSee the SDK documentation for available methods.\n\n## Preprocessor Scripts\n\nFor preprocessor scripts, the function should be named `preprocessor` and receives an `event` parameter:\n\n```python\nfrom typing import TypedDict, Literal, Any\n\nclass Event(TypedDict):\n kind: Literal[\"webhook\", \"http\", \"websocket\", \"kafka\", \"email\", \"nats\", \"postgres\", \"sqs\", \"mqtt\", \"gcp\"]\n body: Any\n headers: dict[str, str]\n query: dict[str, str]\n\ndef preprocessor(event: Event):\n # Transform the event into flow input parameters\n return {\n \"param1\": event[\"body\"][\"field1\"],\n \"param2\": 
event[\"query\"][\"id\"]\n }\n```\n\n## S3 Object Operations\n\nWindmill provides built-in support for S3-compatible storage operations.\n\n```python\nimport wmill\n\n# Load file content from S3\ncontent: bytes = wmill.load_s3_file(s3object)\n\n# Load file as stream reader\nreader: BufferedReader = wmill.load_s3_file_reader(s3object)\n\n# Write file to S3\nresult: S3Object = wmill.write_s3_file(\n s3object, # Target path (or None to auto-generate)\n file_content, # bytes or BufferedReader\n s3_resource_path, # Optional: specific S3 resource\n content_type, # Optional: MIME type\n content_disposition # Optional: Content-Disposition header\n)\n```\n\n\n# Rust\n\n## Structure\n\nThe script must contain a function called `main` with proper return type:\n\n```rust\nuse anyhow::anyhow;\nuse serde::Serialize;\n\n#[derive(Serialize, Debug)]\nstruct ReturnType {\n result: String,\n count: i32,\n}\n\nfn main(param1: String, param2: i32) -> anyhow::Result<ReturnType> {\n Ok(ReturnType {\n result: param1,\n count: param2,\n })\n}\n```\n\n**Important:**\n- Arguments should be owned types\n- Return type must be serializable (`#[derive(Serialize)]`)\n- Return type is `anyhow::Result<T>`\n\n## Dependencies\n\nPackages must be specified with a partial cargo.toml at the beginning of the script:\n\n```rust\n//! ```cargo\n//! [dependencies]\n//! anyhow = \"1.0.86\"\n//! reqwest = { version = \"0.11\", features = [\"json\"] }\n//! tokio = { version = \"1\", features = [\"full\"] }\n//! ```\n\nuse anyhow::anyhow;\n// ... rest of the code\n```\n\n**Note:** Serde is already included, no need to add it again.\n\n## Async Functions\n\nIf you need to handle async functions (e.g., using tokio), keep the main function sync and create the runtime inside:\n\n```rust\n//! ```cargo\n//! [dependencies]\n//! anyhow = \"1.0.86\"\n//! tokio = { version = \"1\", features = [\"full\"] }\n//! reqwest = { version = \"0.11\", features = [\"json\"] }\n//! 
```\n\nuse anyhow::anyhow;\nuse serde::Serialize;\n\n#[derive(Serialize, Debug)]\nstruct Response {\n data: String,\n}\n\nfn main(url: String) -> anyhow::Result<Response> {\n let rt = tokio::runtime::Runtime::new()?;\n rt.block_on(async {\n let resp = reqwest::get(&url).await?.text().await?;\n Ok(Response { data: resp })\n })\n}\n```\n\n\n# Snowflake\n\nArguments use `?` placeholders.\n\nName the parameters by adding comments before the statement:\n\n```sql\n-- ? name1 (text)\n-- ? name2 (number) = 0\nSELECT * FROM users WHERE name = ? AND age > ?;\n```\n\n\n# TypeScript SDK (windmill-client)\n\nImport: import * as wmill from 'windmill-client'\n\n/**\n * Initialize the Windmill client with authentication token and base URL\n * @param token - Authentication token (defaults to WM_TOKEN env variable)\n * @param baseUrl - API base URL (defaults to BASE_INTERNAL_URL or BASE_URL env variable)\n */\nsetClient(token?: string, baseUrl?: string): void\n\n/**\n * Get the current workspace id from the WM_WORKSPACE env variable\n * @returns workspace id\n */\ngetWorkspace(): string\n\n/**\n * Get a resource value by path\n * @param path path of the resource, default to internal state path\n * @param undefinedIfEmpty if the resource does not exist, return undefined instead of throwing an error\n * @returns resource value\n */\nasync getResource(path?: string, undefinedIfEmpty?: boolean): Promise<any>\n\n/**\n * Get the true root job id\n * @param jobId job id to get the root job id from (default to current job)\n * @returns root job id\n */\nasync getRootJobId(jobId?: string): Promise<string>\n\n/**\n * @deprecated Use runScriptByPath or runScriptByHash instead\n */\nasync runScript(path: string | null = null, hash_: string | null = null, args: Record<string, any> | null = null, verbose: boolean = false): Promise<any>\n\n/**\n * Run a script synchronously by its path and wait for the result\n * @param path - Script path in Windmill\n * @param args - Arguments to pass to the script\n * 
@param verbose - Enable verbose logging\n * @returns Script execution result\n */\nasync runScriptByPath(path: string, args: Record<string, any> | null = null, verbose: boolean = false): Promise<any>\n\n/**\n * Run a script synchronously by its hash and wait for the result\n * @param hash_ - Script hash in Windmill\n * @param args - Arguments to pass to the script\n * @param verbose - Enable verbose logging\n * @returns Script execution result\n */\nasync runScriptByHash(hash_: string, args: Record<string, any> | null = null, verbose: boolean = false): Promise<any>\n\n/**\n * Append text to the result stream\n * @param text text to append to the result stream\n */\nappendToResultStream(text: string): void\n\n/**\n * Pipe an async stream into the result stream\n * @param stream async iterable whose chunks are appended to the result stream\n */\nasync streamResult(stream: AsyncIterable<string>): Promise<void>\n\n/**\n * Run a flow synchronously by its path and wait for the result\n * @param path - Flow path in Windmill\n * @param args - Arguments to pass to the flow\n * @param verbose - Enable verbose logging\n * @returns Flow execution result\n */\nasync runFlow(path: string | null = null, args: Record<string, any> | null = null, verbose: boolean = false): Promise<any>\n\n/**\n * Wait for a job to complete and return its result\n * @param jobId - ID of the job to wait for\n * @param verbose - Enable verbose logging\n * @returns Job result when completed\n */\nasync waitJob(jobId: string, verbose: boolean = false): Promise<any>\n\n/**\n * Get the result of a completed job\n * @param jobId - ID of the completed job\n * @returns Job result\n */\nasync getResult(jobId: string): Promise<any>\n\n/**\n * Get the result of a job if completed, or its current status\n * @param jobId - ID of the job\n * @returns Object with started, completed, success, and result properties\n */\nasync getResultMaybe(jobId: string): Promise<any>\n\n/**\n * Wrap a function to execute as a Windmill task within a flow context\n * 
@param f - Function to wrap as a task\n * @returns Async wrapper function that executes as a Windmill job\n */\ntask<P, T>(f: (_: P) => T): (_: P) => Promise<T>\n\n/**\n * @deprecated Use runScriptByPathAsync or runScriptByHashAsync instead\n */\nasync runScriptAsync(path: string | null, hash_: string | null, args: Record<string, any> | null, scheduledInSeconds: number | null = null): Promise<string>\n\n/**\n * Run a script asynchronously by its path\n * @param path - Script path in Windmill\n * @param args - Arguments to pass to the script\n * @param scheduledInSeconds - Schedule execution for a future time (in seconds)\n * @returns Job ID of the created job\n */\nasync runScriptByPathAsync(path: string, args: Record<string, any> | null = null, scheduledInSeconds: number | null = null): Promise<string>\n\n/**\n * Run a script asynchronously by its hash\n * @param hash_ - Script hash in Windmill\n * @param args - Arguments to pass to the script\n * @param scheduledInSeconds - Schedule execution for a future time (in seconds)\n * @returns Job ID of the created job\n */\nasync runScriptByHashAsync(hash_: string, args: Record<string, any> | null = null, scheduledInSeconds: number | null = null): Promise<string>\n\n/**\n * Run a flow asynchronously by its path\n * @param path - Flow path in Windmill\n * @param args - Arguments to pass to the flow\n * @param scheduledInSeconds - Schedule execution for a future time (in seconds)\n * @param doNotTrackInParent - If false, the child flow stores its state in the parent job; only set this to false when the job is fully awaited and not concurrent with any other job, as otherwise the child flow and its own children will store state in the parent job, leading to incorrectness and failures\n * @returns Job ID of the created job\n */\nasync runFlowAsync(path: string | null, args: Record<string, any> | null, scheduledInSeconds: number | null = null, doNotTrackInParent: boolean = true): 
Promise<string>\n\n/**\n * Resolve a resource value in case the default value was picked because the input payload was undefined\n * @param obj resource value or path of the resource under the format `$res:path`\n * @returns resource value\n */\nasync resolveDefaultResource(obj: any): Promise<any>\n\n/**\n * Get the state file path from environment variables\n * @returns State path string\n */\ngetStatePath(): string\n\n/**\n * Set a resource value by path\n * @param path path of the resource to set, default to state path\n * @param value new value of the resource to set\n * @param initializeToTypeIfNotExist if the resource does not exist, initialize it with this type\n */\nasync setResource(value: any, path?: string, initializeToTypeIfNotExist?: string): Promise<void>\n\n/**\n * Set the state\n * @param state state to set\n * @deprecated use setState instead\n */\nasync setInternalState(state: any): Promise<void>\n\n/**\n * Set the state\n * @param state state to set\n */\nasync setState(state: any): Promise<void>\n\n/**\n * Set the progress\n * Progress cannot decrease and is limited to the 0% to 99% range\n * @param percent Progress to set in %\n * @param jobId? Job to set progress for\n */\nasync setProgress(percent: number, jobId?: any): Promise<void>\n\n/**\n * Get the progress\n * @param jobId? 
Job to get progress from\n * @returns Progress value clamped between 0 and 100, or null if unset\n */\nasync getProgress(jobId?: any): Promise<number | null>\n\n/**\n * Set a flow user state\n * @param key key of the state\n * @param value value of the state\n */\nasync setFlowUserState(key: string, value: any, errorIfNotPossible?: boolean): Promise<void>\n\n/**\n * Get a flow user state\n * @param key key of the state\n */\nasync getFlowUserState(key: string, errorIfNotPossible?: boolean): Promise<any>\n\n/**\n * Get the internal state\n * @deprecated use getState instead\n */\nasync getInternalState(): Promise<any>\n\n/**\n * Get the state shared across executions\n */\nasync getState(): Promise<any>\n\n/**\n * Get a variable by path\n * @param path path of the variable\n * @returns variable value\n */\nasync getVariable(path: string): Promise<string>\n\n/**\n * Set a variable by path, create if not exist\n * @param path path of the variable\n * @param value value of the variable\n * @param isSecretIfNotExist if the variable does not exist, create it as secret or not (default: false)\n * @param descriptionIfNotExist if the variable does not exist, create it with this description (default: \"\")\n */\nasync setVariable(path: string, value: string, isSecretIfNotExist?: boolean, descriptionIfNotExist?: string): Promise<void>\n\n/**\n * Build a PostgreSQL connection URL from a database resource\n * @param path - Path to the database resource\n * @returns PostgreSQL connection URL string\n */\nasync databaseUrlFromResource(path: string): Promise<string>\n\n/**\n * Get S3 client settings from a resource or workspace default\n * @param s3_resource_path - Path to S3 resource (uses workspace default if undefined)\n * @returns S3 client configuration settings\n */\nasync denoS3LightClientSettings(s3_resource_path: string | undefined): Promise<DenoS3LightClientSettings>\n\n/**\n * Load the content of a file stored in S3. 
If the s3ResourcePath is undefined, it will default to the workspace S3 resource.\n * \n * ```typescript\n * let fileContent = await wmill.loadS3File(inputFile)\n * // if the file is a raw text file, it can be decoded and printed directly:\n * const text = new TextDecoder().decode(fileContent)\n * console.log(text);\n * ```\n */\nasync loadS3File(s3object: S3Object, s3ResourcePath: string | undefined = undefined): Promise<Uint8Array | undefined>\n\n/**\n * Load the content of a file stored in S3 as a stream. If the s3ResourcePath is undefined, it will default to the workspace S3 resource.\n * \n * ```typescript\n * let fileContentBlob = await wmill.loadS3FileStream(inputFile)\n * // if the content is plain text, the blob can be read directly:\n * console.log(await fileContentBlob.text());\n * ```\n */\nasync loadS3FileStream(s3object: S3Object, s3ResourcePath: string | undefined = undefined): Promise<Blob | undefined>\n\n/**\n * Persist a file to the S3 bucket. If the s3ResourcePath is undefined, it will default to the workspace S3 resource.\n * \n * ```typescript\n * const s3object = await wmill.writeS3File(s3Object, \"Hello Windmill!\")\n * const writtenContent = await wmill.loadS3File(s3object)\n * const fileContentAsUtf8Str = new TextDecoder().decode(writtenContent)\n * console.log(fileContentAsUtf8Str)\n * ```\n */\nasync writeS3File(s3object: S3Object | undefined, fileContent: string | Blob, s3ResourcePath: string | undefined = undefined, contentType: string | undefined = undefined, contentDisposition: string | undefined = undefined): Promise<S3Object>\n\n/**\n * Sign S3 objects to be used by anonymous users in public apps\n * @param s3objects s3 objects to sign\n * @returns signed s3 objects\n */\nasync signS3Objects(s3objects: S3Object[]): Promise<S3Object[]>\n\n/**\n * Sign S3 object to be used by anonymous users in public apps\n * @param s3object s3 object to sign\n * @returns signed s3 object\n */\nasync signS3Object(s3object: S3Object): Promise<S3Object>\n\n/**\n * Generate a presigned public URL for 
an array of S3 objects.\n * If an S3 object is not signed yet, it will be signed first.\n * @param s3Objects s3 objects to sign\n * @returns list of signed public URLs\n */\nasync getPresignedS3PublicUrls(s3Objects: S3Object[], { baseUrl }: { baseUrl?: string } = {}): Promise<string[]>\n\n/**\n * Generate a presigned public URL for an S3 object. If the S3 object is not signed yet, it will be signed first.\n * @param s3Object s3 object to sign\n * @returns signed public URL\n */\nasync getPresignedS3PublicUrl(s3Object: S3Object, { baseUrl }: { baseUrl?: string } = {}): Promise<string>\n\n/**\n * Get URLs needed for resuming a flow after this step\n * @param approver approver name\n * @returns approval page UI URL, resume and cancel API URLs for resuming the flow\n */\nasync getResumeUrls(approver?: string): Promise<{\n approvalPage: string;\n resume: string;\n cancel: string;\n}>\n\n/**\n * @deprecated use getResumeUrls instead\n */\ngetResumeEndpoints(approver?: string): Promise<{\n approvalPage: string;\n resume: string;\n cancel: string;\n}>\n\n/**\n * Get an OIDC jwt token for auth to external services (e.g. Vault, AWS) (EE only)\n * @param audience audience of the token\n * @param expiresIn Optional number of seconds until the token expires\n * @returns jwt token\n */\nasync getIdToken(audience: string, expiresIn?: number): Promise<string>\n\n/**\n * Convert a base64-encoded string to Uint8Array\n * @param data - Base64-encoded string\n * @returns Decoded Uint8Array\n */\nbase64ToUint8Array(data: string): Uint8Array\n\n/**\n * Convert a Uint8Array to base64-encoded string\n * @param arrayBuffer - Uint8Array to encode\n * @returns Base64-encoded string\n */\nuint8ArrayToBase64(arrayBuffer: Uint8Array): string\n\n/**\n * Get email from workspace username\n * This method is particularly useful for apps that require the email address of the viewer.\n * Indeed, in the viewer context, WM_USERNAME is set to the username of the viewer but WM_EMAIL is set to the email 
of the creator of the app.\n * @param username\n * @returns email address\n */\nasync usernameToEmail(username: string): Promise<string>\n\n/**\n * Sends an interactive approval request via Slack, allowing optional customization of the message, approver, and form fields.\n * \n * **[Enterprise Edition Only]** To include form fields in the Slack approval request, go to **Advanced -> Suspend -> Form**\n * and define a form. Learn more at [Windmill Documentation](https://www.windmill.dev/docs/flows/flow_approval#form).\n * \n * @param {Object} options - The configuration options for the Slack approval request.\n * @param {string} options.slackResourcePath - The path to the Slack resource in Windmill.\n * @param {string} options.channelId - The Slack channel ID where the approval request will be sent.\n * @param {string} [options.message] - Optional custom message to include in the Slack approval request.\n * @param {string} [options.approver] - Optional user ID or name of the approver for the request.\n * @param {DefaultArgs} [options.defaultArgsJson] - Optional object defining or overriding the default arguments to a form field.\n * @param {Enums} [options.dynamicEnumsJson] - Optional object overriding the enum default values of an enum form field.\n * \n * @returns {Promise<void>} Resolves when the Slack approval request is successfully sent.\n * \n * @throws {Error} If the function is not called within a flow or flow preview.\n * @throws {Error} If the `JobService.getSlackApprovalPayload` call fails.\n * \n * **Usage Example:**\n * ```typescript\n * await requestInteractiveSlackApproval({\n * slackResourcePath: \"/u/alex/my_slack_resource\",\n * channelId: \"admins-slack-channel\",\n * message: \"Please approve this request\",\n * approver: \"approver123\",\n * defaultArgsJson: { key1: \"value1\", key2: 42 },\n * dynamicEnumsJson: { foo: [\"choice1\", \"choice2\"], bar: [\"optionA\", \"optionB\"] },\n * });\n * ```\n * \n * **Note:** This function requires 
execution within a Windmill flow or flow preview.\n */\nasync requestInteractiveSlackApproval({ slackResourcePath, channelId, message, approver, defaultArgsJson, dynamicEnumsJson, }: SlackApprovalOptions): Promise<void>\n\n/**\n * Sends an interactive approval request via Teams, allowing optional customization of the message, approver, and form fields.\n * \n * **[Enterprise Edition Only]** To include form fields in the Teams approval request, go to **Advanced -> Suspend -> Form**\n * and define a form. Learn more at [Windmill Documentation](https://www.windmill.dev/docs/flows/flow_approval#form).\n * \n * @param {Object} options - The configuration options for the Teams approval request.\n * @param {string} options.teamName - The Teams team name where the approval request will be sent.\n * @param {string} options.channelName - The Teams channel name where the approval request will be sent.\n * @param {string} [options.message] - Optional custom message to include in the Teams approval request.\n * @param {string} [options.approver] - Optional user ID or name of the approver for the request.\n * @param {DefaultArgs} [options.defaultArgsJson] - Optional object defining or overriding the default arguments to a form field.\n * @param {Enums} [options.dynamicEnumsJson] - Optional object overriding the enum default values of an enum form field.\n * \n * @returns {Promise<void>} Resolves when the Teams approval request is successfully sent.\n * \n * @throws {Error} If the function is not called within a flow or flow preview.\n * @throws {Error} If the `JobService.getTeamsApprovalPayload` call fails.\n * \n * **Usage Example:**\n * ```typescript\n * await requestInteractiveTeamsApproval({\n * teamName: \"admins-teams\",\n * channelName: \"admins-teams-channel\",\n * message: \"Please approve this request\",\n * approver: \"approver123\",\n * defaultArgsJson: { key1: \"value1\", key2: 42 },\n * dynamicEnumsJson: { foo: [\"choice1\", \"choice2\"], bar: [\"optionA\", 
\"optionB\"] },\n * });\n * ```\n * \n * **Note:** This function requires execution within a Windmill flow or flow preview.\n */\nasync requestInteractiveTeamsApproval({ teamName, channelName, message, approver, defaultArgsJson, dynamicEnumsJson, }: TeamsApprovalOptions): Promise<void>\n\n/**\n * Parse an S3 object from URI string or record format\n * @param s3Object - S3 object as URI string (s3://storage/key) or record\n * @returns S3 object record with storage and s3 key\n */\nparseS3Object(s3Object: S3Object): S3ObjectRecord\n\n/**\n * Create a SQL template function for PostgreSQL/datatable queries\n * @param name - Database/datatable name (default: \"main\")\n * @returns SQL template function for building parameterized queries\n * @example\n * let sql = wmill.datatable()\n * let name = 'Robin'\n * let age = 21\n * await sql`\n * SELECT * FROM friends\n * WHERE name = ${name} AND age = ${age}::int\n * `.fetch()\n */\ndatatable(name: string = \"main\"): DatatableSqlTemplateFunction\n\n/**\n * Create a SQL template function for DuckDB/ducklake queries\n * @param name - DuckDB database name (default: \"main\")\n * @returns SQL template function for building parameterized queries\n * @example\n * let sql = wmill.ducklake()\n * let name = 'Robin'\n * let age = 21\n * await sql`\n * SELECT * FROM friends\n * WHERE name = ${name} AND age = ${age}\n * `.fetch()\n */\nducklake(name: string = \"main\"): SqlTemplateFunction\n\nasync polarsConnectionSettings(s3_resource_path: string | undefined): Promise<any>\n\nasync duckdbConnectionSettings(s3_resource_path: string | undefined): Promise<any>\n\n\n# Python SDK (wmill)\n\nImport: import wmill\n\ndef get_mocked_api() -> Optional[dict]\n\n# Get the HTTP client instance.\n# \n# Returns:\n# Configured httpx.Client for API requests\ndef get_client() -> httpx.Client\n\n# Make an HTTP GET request to the Windmill API.\n# \n# Args:\n# endpoint: API endpoint path\n# raise_for_status: Whether to raise an exception on HTTP errors\n# 
**kwargs: Additional arguments passed to httpx.get\n# \n# Returns:\n# HTTP response object\ndef get(endpoint, raise_for_status = True, **kwargs) -> httpx.Response\n\n# Make an HTTP POST request to the Windmill API.\n# \n# Args:\n# endpoint: API endpoint path\n# raise_for_status: Whether to raise an exception on HTTP errors\n# **kwargs: Additional arguments passed to httpx.post\n# \n# Returns:\n# HTTP response object\ndef post(endpoint, raise_for_status = True, **kwargs) -> httpx.Response\n\n# Create a new authentication token.\n# \n# Args:\n# duration: Token validity duration (default: 1 day)\n# \n# Returns:\n# New authentication token string\ndef create_token(duration = dt.timedelta(days=1)) -> str\n\n# Create a script job and return its job id.\n# \n# .. deprecated:: Use run_script_by_path_async or run_script_by_hash_async instead.\ndef run_script_async(path: str = None, hash_: str = None, args: dict = None, scheduled_in_secs: int = None) -> str\n\n# Create a script job by path and return its job id.\ndef run_script_by_path_async(path: str, args: dict = None, scheduled_in_secs: int = None) -> str\n\n# Create a script job by hash and return its job id.\ndef run_script_by_hash_async(hash_: str, args: dict = None, scheduled_in_secs: int = None) -> str\n\n# Create a flow job and return its job id.\ndef run_flow_async(path: str, args: dict = None, scheduled_in_secs: int = None, do_not_track_in_parent: bool = True) -> str\n\n# Run script synchronously and return its result.\n# \n# .. 
deprecated:: Use run_script_by_path or run_script_by_hash instead.\ndef run_script(path: str = None, hash_: str = None, args: dict = None, timeout: dt.timedelta | int | float | None = None, verbose: bool = False, cleanup: bool = True, assert_result_is_not_none: bool = False) -> Any\n\n# Run script by path synchronously and return its result.\ndef run_script_by_path(path: str, args: dict = None, timeout: dt.timedelta | int | float | None = None, verbose: bool = False, cleanup: bool = True, assert_result_is_not_none: bool = False) -> Any\n\n# Run script by hash synchronously and return its result.\ndef run_script_by_hash(hash_: str, args: dict = None, timeout: dt.timedelta | int | float | None = None, verbose: bool = False, cleanup: bool = True, assert_result_is_not_none: bool = False) -> Any\n\n# Run a script on the current worker without creating a job\ndef run_inline_script_preview(content: str, language: str, args: dict = None) -> Any\n\n# Wait for a job to complete and return its result.\n# \n# Args:\n# job_id: ID of the job to wait for\n# timeout: Maximum time to wait (seconds or timedelta)\n# verbose: Enable verbose logging\n# cleanup: Register cleanup handler to cancel job on exit\n# assert_result_is_not_none: Raise exception if result is None\n# \n# Returns:\n# Job result when completed\n# \n# Raises:\n# TimeoutError: If timeout is reached\n# Exception: If job fails\ndef wait_job(job_id, timeout: dt.timedelta | int | float | None = None, verbose: bool = False, cleanup: bool = True, assert_result_is_not_none: bool = False)\n\n# Cancel a specific job by ID.\n# \n# Args:\n# job_id: UUID of the job to cancel\n# reason: Optional reason for cancellation\n# \n# Returns:\n# Response message from the cancel endpoint\ndef cancel_job(job_id: str, reason: str = None) -> str\n\n# Cancel currently running executions of the same script.\ndef cancel_running() -> dict\n\n# Get job details by ID.\n# \n# Args:\n# job_id: UUID of the job\n# \n# Returns:\n# Job details 
dictionary\ndef get_job(job_id: str) -> dict\n\n# Get the root job ID for a flow hierarchy.\n# \n# Args:\n# job_id: Job ID (defaults to current WM_JOB_ID)\n# \n# Returns:\n# Root job ID\ndef get_root_job_id(job_id: str | None = None) -> dict\n\n# Get an OIDC JWT token for authentication to external services.\n# \n# Args:\n# audience: Token audience (e.g., \"vault\", \"aws\")\n# expires_in: Optional expiration time in seconds\n# \n# Returns:\n# JWT token string\ndef get_id_token(audience: str, expires_in: int | None = None) -> str\n\n# Get the status of a job.\n# \n# Args:\n# job_id: UUID of the job\n# \n# Returns:\n# Job status: \"RUNNING\", \"WAITING\", or \"COMPLETED\"\ndef get_job_status(job_id: str) -> JobStatus\n\n# Get the result of a completed job.\n# \n# Args:\n# job_id: UUID of the completed job\n# assert_result_is_not_none: Raise exception if result is None\n# \n# Returns:\n# Job result\ndef get_result(job_id: str, assert_result_is_not_none: bool = True) -> Any\n\n# Get a variable value by path.\n# \n# Args:\n# path: Variable path in Windmill\n# \n# Returns:\n# Variable value as string\ndef get_variable(path: str) -> str\n\n# Set a variable value by path, creating it if it doesn't exist.\n# \n# Args:\n# path: Variable path in Windmill\n# value: Variable value to set\n# is_secret: Whether the variable should be secret (default: False)\ndef set_variable(path: str, value: str, is_secret: bool = False) -> None\n\n# Get a resource value by path.\n# \n# Args:\n# path: Resource path in Windmill\n# none_if_undefined: Return None instead of raising if not found\n# \n# Returns:\n# Resource value dictionary or None\ndef get_resource(path: str, none_if_undefined: bool = False) -> dict | None\n\n# Set a resource value by path, creating it if it doesn't exist.\n# \n# Args:\n# value: Resource value to set\n# path: Resource path in Windmill\n# resource_type: Resource type for creation\ndef set_resource(value: Any, path: str, resource_type: str)\n\n# List resources from 
Windmill workspace.\n# \n# Args:\n# resource_type: Optional resource type to filter by (e.g., \"postgresql\", \"mysql\", \"s3\")\n# page: Optional page number for pagination\n# per_page: Optional number of results per page\n# \n# Returns:\n# List of resource dictionaries\ndef list_resources(resource_type: str = None, page: int = None, per_page: int = None) -> list[dict]\n\n# Set the workflow state.\n# \n# Args:\n# value: State value to set\ndef set_state(value: Any)\n\n# Set job progress percentage (0-99).\n# \n# Args:\n# value: Progress percentage\n# job_id: Job ID (defaults to current WM_JOB_ID)\ndef set_progress(value: int, job_id: Optional[str] = None)\n\n# Get job progress percentage.\n# \n# Args:\n# job_id: Job ID (defaults to current WM_JOB_ID)\n# \n# Returns:\n# Progress value (0-100) or None if not set\ndef get_progress(job_id: Optional[str] = None) -> Any\n\n# Set the user state of a flow at a given key\ndef set_flow_user_state(key: str, value: Any) -> None\n\n# Get the user state of a flow at a given key\ndef get_flow_user_state(key: str) -> Any\n\n# Get the Windmill server version.\n# \n# Returns:\n# Version string\ndef version()\n\n# Convenient helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection from DuckDB\ndef get_duckdb_connection_settings(s3_resource_path: str = '') -> DuckDbConnectionSettings | None\n\n# Convenient helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection from Polars\ndef get_polars_connection_settings(s3_resource_path: str = '') -> PolarsConnectionSettings\n\n# Convenient helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection using boto3\ndef get_boto3_connection_settings(s3_resource_path: str = '') -> Boto3ConnectionSettings\n\n# Load a file from the workspace s3 bucket and return its content as bytes.\n# \n# '''python\n# from wmill import S3Object\n# \n# s3_obj = 
S3Object(s3=\"/path/to/my_file.txt\")\n# my_obj_content = client.load_s3_file(s3_obj)\n# file_content = my_obj_content.decode(\"utf-8\")\n# '''\ndef load_s3_file(s3object: S3Object | str, s3_resource_path: str | None) -> bytes\n\n# Load a file from the workspace s3 bucket and return the bytes stream.\n# \n# '''python\n# from wmill import S3Object\n# \n# s3_obj = S3Object(s3=\"/path/to/my_file.txt\")\n# with wmill.load_s3_file_reader(s3object, s3_resource_path) as file_reader:\n# print(file_reader.read())\n# '''\ndef load_s3_file_reader(s3object: S3Object | str, s3_resource_path: str | None) -> BufferedReader\n\n# Write a file to the workspace S3 bucket\n# \n# '''python\n# from wmill import S3Object\n# \n# s3_obj = S3Object(s3=\"/path/to/my_file.txt\")\n# \n# # for an in memory bytes array:\n# file_content = b'Hello Windmill!'\n# client.write_s3_file(s3_obj, file_content)\n# \n# # for a file:\n# with open(\"my_file.txt\", \"rb\") as my_file:\n# client.write_s3_file(s3_obj, my_file)\n# '''\ndef write_s3_file(s3object: S3Object | str | None, file_content: BufferedReader | bytes, s3_resource_path: str | None, content_type: str | None = None, content_disposition: str | None = None) -> S3Object\n\n# Sign S3 objects for use by anonymous users in public apps.\n# \n# Args:\n# s3_objects: List of S3 objects to sign\n# \n# Returns:\n# List of signed S3 objects\ndef sign_s3_objects(s3_objects: list[S3Object | str]) -> list[S3Object]\n\n# Sign a single S3 object for use by anonymous users in public apps.\n# \n# Args:\n# s3_object: S3 object to sign\n# \n# Returns:\n# Signed S3 object\ndef sign_s3_object(s3_object: S3Object | str) -> S3Object\n\n# Generate presigned public URLs for an array of S3 objects.\n# If an S3 object is not signed yet, it will be signed first.\n# \n# Args:\n# s3_objects: List of S3 objects to sign\n# base_url: Optional base URL for the presigned URLs (defaults to WM_BASE_URL)\n# \n# Returns:\n# List of signed public URLs\n# \n# Example:\n# >>> s3_objs = 
[S3Object(s3=\"/path/to/file1.txt\"), S3Object(s3=\"/path/to/file2.txt\")]\n# >>> urls = client.get_presigned_s3_public_urls(s3_objs)\ndef get_presigned_s3_public_urls(s3_objects: list[S3Object | str], base_url: str | None = None) -> list[str]\n\n# Generate a presigned public URL for an S3 object.\n# If the S3 object is not signed yet, it will be signed first.\n# \n# Args:\n# s3_object: S3 object to sign\n# base_url: Optional base URL for the presigned URL (defaults to WM_BASE_URL)\n# \n# Returns:\n# Signed public URL\n# \n# Example:\n# >>> s3_obj = S3Object(s3=\"/path/to/file.txt\")\n# >>> url = client.get_presigned_s3_public_url(s3_obj)\ndef get_presigned_s3_public_url(s3_object: S3Object | str, base_url: str | None = None) -> str\n\n# Get the current user information.\n# \n# Returns:\n# User details dictionary\ndef whoami() -> dict\n\n# Get the current user information (alias for whoami).\n# \n# Returns:\n# User details dictionary\ndef user() -> dict\n\n# Get the state resource path from environment.\n# \n# Returns:\n# State path string\ndef state_path() -> str\n\n# Get the workflow state.\n# \n# Returns:\n# State value or None if not set\ndef state() -> Any\n\n# Set the state in the shared folder using pickle\ndef set_shared_state_pickle(value: Any, path: str = 'state.pickle') -> None\n\n# Get the state in the shared folder using pickle\ndef get_shared_state_pickle(path: str = 'state.pickle') -> Any\n\n# Set the state in the shared folder using JSON\ndef set_shared_state(value: Any, path: str = 'state.json') -> None\n\n# Get the state in the shared folder using JSON\ndef get_shared_state(path: str = 'state.json') -> None\n\n# Get URLs needed for resuming a flow after suspension.\n# \n# Args:\n# approver: Optional approver name\n# \n# Returns:\n# Dictionary with approvalPage, resume, and cancel URLs\ndef get_resume_urls(approver: str = None) -> dict\n\n# Sends an interactive approval request via Slack, allowing optional customization of the message, 
approver, and form fields.\n# \n# **[Enterprise Edition Only]** To include form fields in the Slack approval request, use the \"Advanced -> Suspend -> Form\" functionality.\n# Learn more at: https://www.windmill.dev/docs/flows/flow_approval#form\n# \n# :param slack_resource_path: The path to the Slack resource in Windmill.\n# :type slack_resource_path: str\n# :param channel_id: The Slack channel ID where the approval request will be sent.\n# :type channel_id: str\n# :param message: Optional custom message to include in the Slack approval request.\n# :type message: str, optional\n# :param approver: Optional user ID or name of the approver for the request.\n# :type approver: str, optional\n# :param default_args_json: Optional dictionary defining or overriding the default arguments for form fields.\n# :type default_args_json: dict, optional\n# :param dynamic_enums_json: Optional dictionary overriding the enum default values of enum form fields.\n# :type dynamic_enums_json: dict, optional\n# \n# :raises Exception: If the function is not called within a flow or flow preview.\n# :raises Exception: If the required flow job or flow step environment variables are not set.\n# \n# :return: None\n# \n# **Usage Example:**\n# >>> client.request_interactive_slack_approval(\n# ... slack_resource_path=\"/u/alex/my_slack_resource\",\n# ... channel_id=\"admins-slack-channel\",\n# ... message=\"Please approve this request\",\n# ... approver=\"approver123\",\n# ... default_args_json={\"key1\": \"value1\", \"key2\": 42},\n# ... dynamic_enums_json={\"foo\": [\"choice1\", \"choice2\"], \"bar\": [\"optionA\", \"optionB\"]},\n# ... 
)\n# \n# **Notes:**\n# - This function must be executed within a Windmill flow or flow preview.\n# - The function checks for required environment variables (`WM_FLOW_JOB_ID`, `WM_FLOW_STEP_ID`) to ensure it is run in the appropriate context.\ndef request_interactive_slack_approval(slack_resource_path: str, channel_id: str, message: str = None, approver: str = None, default_args_json: dict = None, dynamic_enums_json: dict = None) -> None\n\n# Get email from workspace username\n# This method is particularly useful for apps that require the email address of the viewer.\n# Indeed, in the viewer context WM_USERNAME is set to the username of the viewer but WM_EMAIL is set to the email of the creator of the app.\ndef username_to_email(username: str) -> str\n\n# Send a message to a Microsoft Teams conversation with conversation_id, where success is used to style the message\ndef send_teams_message(conversation_id: str, text: str, success: bool = True, card_block: dict = None)\n\n# Get a DataTable client for SQL queries.\n# \n# Args:\n# name: Database name (default: \"main\")\n# \n# Returns:\n# DataTableClient instance\ndef datatable(name: str = 'main')\n\n# Get a DuckLake client for DuckDB queries.\n# \n# Args:\n# name: Database name (default: \"main\")\n# \n# Returns:\n# DucklakeClient instance\ndef ducklake(name: str = 'main')\n\ndef init_global_client(f)\n\ndef deprecate(in_favor_of: str)\n\n# Get the current workspace ID.\n# \n# Returns:\n# Workspace ID string\ndef get_workspace() -> str\n\ndef get_version() -> str\n\n# Run a script synchronously by hash and return its result.\n# \n# Args:\n# hash: Script hash\n# args: Script arguments\n# verbose: Enable verbose logging\n# assert_result_is_not_none: Raise exception if result is None\n# cleanup: Register cleanup handler to cancel job on exit\n# timeout: Maximum time to wait\n# \n# Returns:\n# Script result\ndef run_script_sync(hash: str, args: Dict[str, Any] = None, verbose: bool = False, assert_result_is_not_none: bool 
= True, cleanup: bool = True, timeout: dt.timedelta = None) -> Any\n\n# Run a script synchronously by path and return its result.\n# \n# Args:\n# path: Script path\n# args: Script arguments\n# verbose: Enable verbose logging\n# assert_result_is_not_none: Raise exception if result is None\n# cleanup: Register cleanup handler to cancel job on exit\n# timeout: Maximum time to wait\n# \n# Returns:\n# Script result\ndef run_script_by_path_sync(path: str, args: Dict[str, Any] = None, verbose: bool = False, assert_result_is_not_none: bool = True, cleanup: bool = True, timeout: dt.timedelta = None) -> Any\n\n# Convenient helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection from DuckDB\ndef duckdb_connection_settings(s3_resource_path: str = '') -> DuckDbConnectionSettings\n\n# Convenient helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection from Polars\ndef polars_connection_settings(s3_resource_path: str = '') -> PolarsConnectionSettings\n\n# Convenient helper that takes an S3 resource as input and returns the settings necessary to\n# initiate an S3 connection using boto3\ndef boto3_connection_settings(s3_resource_path: str = '') -> Boto3ConnectionSettings\n\n# Get the state\ndef get_state() -> Any\n\n# Get the state resource path from environment.\n# \n# Returns:\n# State path string\ndef get_state_path() -> str\n\n# Decorator to mark a function as a workflow task.\n# \n# When executed inside a Windmill job, the decorated function runs as a\n# separate workflow step. 
Outside Windmill, it executes normally.\n# \n# Args:\n# tag: Optional worker tag for execution\n# \n# Returns:\n# Decorated function\ndef task(*args, **kwargs)\n\n# Parse resource syntax from string.\ndef parse_resource_syntax(s: str) -> Optional[str]\n\n# Parse S3 object from string or S3Object format.\ndef parse_s3_object(s3_object: S3Object | str) -> S3Object\n\n# Parse variable syntax from string.\ndef parse_variable_syntax(s: str) -> Optional[str]\n\n# Append text to the result stream.\n# \n# Args:\n# text: text to append to the result stream\ndef append_to_result_stream(text: str) -> None\n\n# Forward a stream to the result stream.\n# \n# Args:\n# stream: the stream to forward to the result stream\ndef stream_result(stream) -> None\n\n# Execute a SQL query against the DataTable.\n# \n# Args:\n# sql: SQL query string with $1, $2, etc. placeholders\n# *args: Positional arguments to bind to query placeholders\n# \n# Returns:\n# SqlQuery instance for fetching results\ndef query(sql: str, *args)\n\n# Execute query and fetch results.\n# \n# Args:\n# result_collection: Optional result collection mode\n# \n# Returns:\n# Query results\ndef fetch(result_collection: str | None = None)\n\n# Execute query and fetch first row of results.\n# \n# Returns:\n# First row of query results\ndef fetch_one()\n\n# DuckDB executor requires explicit argument types at declaration\n# These types exist in both DuckDB and Postgres\n# Check that the types exist if you plan to extend this function for other SQL engines.\ndef infer_sql_type(value) -> str\n\n\n";
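The SDK listing above documents `parseS3Object` / `parse_s3_object` as accepting either a record or an `s3://storage/key` URI. As a rough illustration of that documented behavior, here is a hypothetical re-implementation (not the SDK's actual code):

```python
def parse_s3_uri(s3_object):
    """Sketch of the documented parse_s3_object behavior (hypothetical, not SDK code).

    Accepts either a record ({"s3": ..., "storage": ...}) or an
    "s3://storage/key" URI and returns the record form.
    """
    if isinstance(s3_object, dict):
        return s3_object  # already a record; pass through unchanged
    if s3_object.startswith("s3://"):
        storage, _, key = s3_object[len("s3://"):].partition("/")
        record = {"s3": key}
        if storage:
            record["storage"] = storage  # omit storage to use the workspace default
        return record
    # bare key with no scheme: treat as a key in the default storage
    return {"s3": s3_object}
```

For example, `parse_s3_uri("s3://secondary/logs/a.txt")` yields `{"s3": "logs/a.txt", "storage": "secondary"}`.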
2
2
  //# sourceMappingURL=script_guidance.d.ts.map
@@ -1 +1 @@
1
- {"version":3,"file":"script_guidance.d.ts","sourceRoot":"","sources":["../../../src/src/guidance/script_guidance.ts"],"names":[],"mappings":"AAUA,eAAO,MAAM,eAAe,mj5DAI3B,CAAC"}
1
+ {"version":3,"file":"script_guidance.d.ts","sourceRoot":"","sources":["../../../src/src/guidance/script_guidance.ts"],"names":[],"mappings":"AAUA,eAAO,MAAM,eAAe,4j5DAI3B,CAAC"}
@@ -22,7 +22,7 @@ import { pull as hubPull } from "./commands/hub/hub.js";
22
22
  import { pull, push } from "./commands/sync/sync.js";
23
23
  import { add as workspaceAdd } from "./commands/workspace/workspace.js";
24
24
  export { flow, app, script, workspace, resource, resourceType, user, variable, hub, folder, schedule, trigger, sync, gitsyncSettings, instance, dev, hubPull, pull, push, workspaceAdd, };
25
- export declare const VERSION = "1.595.0";
25
+ export declare const VERSION = "1.597.1";
26
26
  export declare const WM_FORK_PREFIX = "wm-fork";
27
27
  declare const command: Command<{
28
28
  workspace?: (import("../deps/jsr.io/@windmill-labs/cliffy-command/1.0.0-rc.5/mod.js").StringType & string) | undefined;
@@ -58,6 +58,18 @@ export type FlowValue = {
58
58
  * Expression to group debounced executions
59
59
  */
60
60
  debounce_key?: string;
61
+ /**
62
+ * Arguments to accumulate across debounced executions
63
+ */
64
+ debounce_args_to_accumulate?: Array<(string)>;
65
+ /**
66
+ * Maximum total time in seconds that a job can be debounced
67
+ */
68
+ max_total_debouncing_time?: number;
69
+ /**
70
+ * Maximum number of times a job can be debounced
71
+ */
72
+ max_total_debounces_amount?: number;
61
73
  /**
62
74
  * JavaScript expression to conditionally skip the entire flow
63
75
  */
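The hunk above adds three debounce-related fields to `FlowValue` alongside the existing `debounce_key`. A hedged sketch of how such a configuration might look as a plain object (field names come from the diff; the values and the `flow_input.user_id` expression are purely illustrative):

```python
# Illustrative only: a flow-level fragment using the debounce fields added in
# this release. Semantics are inferred from the field doc comments in the diff.
flow_debounce_settings = {
    "debounce_key": "flow_input.user_id",       # expression grouping debounced executions
    "debounce_args_to_accumulate": ["events"],  # arguments accumulated across debounced executions
    "max_total_debouncing_time": 600,           # cap (seconds) on total time a job can be debounced
    "max_total_debounces_amount": 20,           # cap on how many times a job can be debounced
}
```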
@@ -534,7 +546,7 @@ export type AiAgent = {
534
546
  user_message: InputTransform;
535
547
  system_prompt?: InputTransform;
536
548
  streaming?: InputTransform;
537
- messages_context_length?: InputTransform;
549
+ memory?: InputTransform;
538
550
  output_schema?: InputTransform;
539
551
  user_images?: InputTransform;
540
552
  max_completion_tokens?: InputTransform;
@@ -571,6 +583,8 @@ export type AiAgent = {
571
583
  * Blacklist of tools to exclude from this MCP server
572
584
  */
573
585
  exclude_tools?: Array<(string)>;
586
+ } | {
587
+ tool_type: 'websearch';
574
588
  });
575
589
  }>;
576
590
  type: 'aiagent';
@@ -649,6 +663,8 @@ export type FlowStatusModule = {
649
663
  arguments?: {
650
664
  [key: string]: unknown;
651
665
  };
666
+ } | {
667
+ type: 'web_search';
652
668
  } | {
653
669
  type: 'message';
654
670
  })>;
@@ -909,6 +925,9 @@ export type Script = {
909
925
  concurrency_key?: string;
910
926
  debounce_key?: string;
911
927
  debounce_delay_s?: number;
928
+ debounce_args_to_accumulate?: Array<(string)>;
929
+ max_total_debouncing_time?: number;
930
+ max_total_debounces_amount?: number;
912
931
  cache_ttl?: number;
913
932
  dedicated_worker?: boolean;
914
933
  ws_error_handler_muted?: boolean;
@@ -953,6 +972,9 @@ export type NewScript = {
953
972
  concurrency_key?: string;
954
973
  debounce_key?: string;
955
974
  debounce_delay_s?: number;
975
+ debounce_args_to_accumulate?: Array<(string)>;
976
+ max_total_debouncing_time?: number;
977
+ max_total_debounces_amount?: number;
956
978
  visible_to_runner_only?: boolean;
957
979
  no_main_func?: boolean;
958
980
  codebase?: string;
@@ -7653,6 +7675,15 @@ export type GetCompletedJobResultMaybeResponse = ({
7653
7675
  success?: boolean;
7654
7676
  started?: boolean;
7655
7677
  });
7678
+ export type GetCompletedJobTimingData = {
7679
+ id: string;
7680
+ workspace: string;
7681
+ };
7682
+ export type GetCompletedJobTimingResponse = ({
7683
+ created_at: string;
7684
+ started_at?: string;
7685
+ duration_ms?: number;
7686
+ });
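The newly added `GetCompletedJobTimingResponse` exposes `created_at`, `started_at`, and `duration_ms`. A hypothetical consumer might derive the queue delay from those fields (a sketch, assuming the timestamps are ISO 8601 strings with UTC offsets):

```python
from datetime import datetime

def summarize_timing(timing):
    """Hypothetical consumer of the GetCompletedJobTimingResponse shape.

    Computes queue delay (started_at - created_at, in seconds) and passes
    duration_ms through; both optional fields may be absent.
    """
    created = datetime.fromisoformat(timing["created_at"])
    queue_delay_s = None
    if timing.get("started_at"):
        started = datetime.fromisoformat(timing["started_at"])
        queue_delay_s = (started - created).total_seconds()
    return {"queue_delay_s": queue_delay_s, "duration_ms": timing.get("duration_ms")}
```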
7656
7687
  export type DeleteCompletedJobData = {
7657
7688
  id: string;
7658
7689
  workspace: string;