@harperfast/agent 0.13.8-ink → 0.14.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +31 -37
  2. package/dist/agent.js +214 -121
  3. package/package.json +1 -2
package/README.md CHANGED
@@ -13,22 +13,11 @@ AI to help you with Harper app creation and management.
 
 ## Getting Started
 
-When you first run `harper-agent`, it will prompt you for an API key if one is not found in your environment. It will then automatically save it to a `.env` file in your current working directory.
+When you first run `harper-agent`, it will guide you through setting up your environment.
 
-If you prefer to set it manually, you can create a `.env` file:
+![What model provider would you like to use today? OpenAI, Anthropic, Google or Ollama](guidance/pick-a-provider.png)
 
-```bash
-# For OpenAI (default)
-OPENAI_API_KEY=your_api_key_here
-
-# For Anthropic
-ANTHROPIC_API_KEY=your_api_key_here
-
-# For Google Gemini
-GOOGLE_GENERATIVE_AI_API_KEY=your_api_key_here
-```
-
-(If you'd rather export these environment variables from within your .zshrc or equivalent file, you can do that instead.)
+It will then automatically save your configuration to either `~/.harper/harper-agent-env` or a local `.env` file in your current working directory.
 
 Now install harper-agent:
 
@@ -44,33 +33,44 @@ npx -y @harperfast/agent
 
 You're ready to go!
 
-```bash
-> harper-agent
+![What would you like to create together?](guidance/what-would-you-like-to-create-together.png)
 
-Working directory: /Users/dawson/Code/softwork-beats
-Harper app detected: Yes
-Press Ctrl+C or hit enter twice to exit.
+## Usage
 
-Harper: What do you want to do together today?
+Once installed or running, you can ask harper-agent to help you with tasks in your current directory, such as applying patches or managing your Harper application.
 
->
-```
+Press `Ctrl+C`, type "exit", or hit enter twice to exit.
 
-### Non-interactive: pipe an initial prompt
+### Non-interactive: pass an initial prompt
 
-You can pass an initial chat dump via stdin. This runs a one-shot interaction and exits after responding:
+You can pass an initial `--prompt=`. Harper Agent will turn that into an actionable plan and then iterate until the plan is complete.
 
 ```bash
-cat somePrompt.md | harper-agent
-# or
-harper-agent < somePrompt.md
+harper-agent --prompt="Write a poem generation app, please"
 ```
 
 In this mode, the initial greeting question is suppressed, and the agent processes the provided prompt immediately.
 
+## Manual Environment Configuration
+
+If you prefer to set up your environment manually, you can create a `.env` file:
+
+```bash
+# For OpenAI (default)
+OPENAI_API_KEY=your_api_key_here
+
+# For Anthropic
+ANTHROPIC_API_KEY=your_api_key_here
+
+# For Google Gemini
+GOOGLE_GENERATIVE_AI_API_KEY=your_api_key_here
+```
+
+(If you'd rather export these environment variables from within your `.zshrc` or equivalent file, you can do that instead.)
+
 ## Model Selection
 
-By default, `harper-agent` uses OpenAI. You can switch to other models using the `--model` (or `-m`) flag:
+When you first fire up Harper Agent, it will ask you which model you want to use. You can also control this with command-line arguments:
 
 ```bash
 # Use Claude 3.5 Sonnet
@@ -83,11 +83,11 @@ harper-agent --model gemini-3-pro
 harper-agent --model gpt-5.2
 ```
 
-You can also set the default model via the `HARPER_AGENT_MODEL` environment variable.
+Or you can set the default model via the `HARPER_AGENT_MODEL` environment variable in your `.env` file or in `~/.harper/harper-agent-env`.
 
 ### Compaction Model
 
-By default, `harper-agent` uses `gpt-5-nano` for session memory compaction. You can switch this to another model using the `--compaction-model` (or `-c`) flag:
+A smaller model will be used for compaction, depending on your chosen LLM provider. You can specify your own model using the `--compaction-model` (or `-c`) flag:
 
 ```bash
 # Use a different compaction model
@@ -98,7 +98,7 @@ You can also set the default compaction model via the `HARPER_AGENT_COMPACTION_M
 
 ### Session Persistence
 
-By default, `harper-agent` uses an in-memory session that is lost when you exit. You can persist your chat session to a SQLite database on disk using the `--session` (or `-s`) flag:
+Harper Agent will ask whether to persist your chat session to a JSON database on disk; you can also enable this with the `--session` (or `-s`) flag:
 
 ```bash
 # Persist session to a file
@@ -176,9 +176,3 @@ npm link
 ```
 
 Now you can run `harper-agent` from any directory.
-
-## Usage
-
-Once installed or running, you can ask harper-agent to help you with tasks in your current directory, such as applying patches or managing your Harper application.
-
-Press `Ctrl+C` or hit enter twice to exit.
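The new README text mentions `~/.harper/harper-agent-env` without showing its shape. Assuming it uses the same plain `KEY=value` lines as the `.env` example (an assumption; the diff does not confirm the format), it might look like:

```shell
# Hypothetical contents of ~/.harper/harper-agent-env (assumed .env-style KEY=value lines)
OPENAI_API_KEY=your_api_key_here
HARPER_AGENT_MODEL=gpt-5.2
```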
package/dist/agent.js CHANGED
@@ -994,6 +994,25 @@ var WorkspaceEditor = class {
       return { status: "failed", output: `Error updating ${operation.path}: ${String(err)}` };
     }
   }
+  async overwriteFile(operation) {
+    try {
+      const targetPath = resolvePath(this.root(), operation.path);
+      await mkdir(path4.dirname(targetPath), { recursive: true });
+      const normalizedInput = normalizeDiff(operation.diff);
+      const lines = normalizedInput.split(/\r?\n/);
+      const hasDiffMarkers = lines.some((line) => line.startsWith("+") || line.startsWith("-"));
+      let finalContent;
+      if (hasDiffMarkers) {
+        finalContent = lines.filter((line) => !line.startsWith("-")).map((line) => line.startsWith("+") || line.startsWith(" ") ? line.slice(1) : line).join("\n");
+      } else {
+        finalContent = normalizedInput;
+      }
+      await writeFile(targetPath, finalContent, "utf8");
+      return { status: "completed", output: `Overwrote ${operation.path}` };
+    } catch (err) {
+      return { status: "failed", output: `Error overwriting ${operation.path}: ${String(err)}` };
+    }
+  }
   async deleteFile(operation) {
     try {
       const targetPath = resolvePath(this.root(), operation.path);
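The `overwriteFile` handler added in this hunk accepts either raw content or content that still carries leading diff markers. Its recovery step can be sketched in isolation (the helper name below is illustrative, not one of the package's exports):

```javascript
// Sketch of the marker-stripping step in overwriteFile: if any line carries a
// "+"/"-" diff marker, removed ("-") lines are dropped and the marker column
// is sliced off the rest; otherwise the input is treated as raw file content.
function recoverContent(input) {
  const lines = input.split(/\r?\n/);
  const hasDiffMarkers = lines.some((l) => l.startsWith("+") || l.startsWith("-"));
  if (!hasDiffMarkers) return input;
  return lines
    .filter((l) => !l.startsWith("-"))
    .map((l) => (l.startsWith("+") || l.startsWith(" ") ? l.slice(1) : l))
    .join("\n");
}

console.log(recoverContent("+const a = 1;\n-const b = 2;\n const c = 3;"));
// prints two lines: "const a = 1;" then "const c = 3;"
```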
@@ -1010,10 +1029,12 @@ var WorkspaceEditor = class {
 
 // tools/files/applyPatchTool.ts
 var ApplyPatchParameters = z11.object({
-  type: z11.enum(["create_file", "update_file", "delete_file"]).describe("The type of operation to perform."),
+  type: z11.enum(["create_file", "update_file", "delete_file", "overwrite_file"]).describe(
+    "The type of operation to perform."
+  ),
   path: z11.string().describe("The path to the file to operate on."),
   diff: z11.string().optional().default("").describe(
-    'The diff to apply. For create_file, every line must start with "+". For update_file, use a headerless unified diff format (start sections with "@@", and use "+", "-", or " " for lines). Do not include markers like "*** Begin Patch" or "*** Add File:".'
+    'The diff to apply. For create_file, every line must start with "+". For update_file, use a headerless unified diff format (start sections with "@@", and use "+", "-", or " " for lines). For overwrite_file, the diff is the raw content of the file. Do not include markers like "*** Begin Patch" or "*** Add File:".'
   )
 });
 function normalizedPath(p) {
@@ -1108,6 +1129,11 @@ async function execute11(operation) {
        return { status: "failed", output: "Error: diff is required for update_file" };
      }
      return await editor.updateFile(operation);
+    case "overwrite_file":
+      if (!operation.diff) {
+        return { status: "failed", output: "Error: diff is required for overwrite_file" };
+      }
+      return await editor.overwriteFile(operation);
     case "delete_file":
       return await editor.deleteFile(operation);
     default:
@@ -2683,9 +2709,96 @@ function excludeFalsy(item) {
   return !!item;
 }
 
+// utils/models/estimateTokens.ts
+function estimateTokens(items) {
+  let chars = 0;
+  for (const it of items) {
+    if (!it) {
+      continue;
+    }
+    if (Array.isArray(it.content)) {
+      for (const c of it.content) {
+        if (!c) {
+          continue;
+        }
+        if (typeof c.text === "string") {
+          chars += c.text.length;
+        } else if (typeof c.content === "string") {
+          chars += c.content.length;
+        } else if (typeof c === "string") {
+          chars += c.length;
+        }
+      }
+    }
+    if (typeof it.content === "string") {
+      chars += it.content.length;
+    }
+    if (typeof it.text === "string") {
+      chars += it.text.length;
+    }
+    if (it.type === "function_call" && it.call) {
+      chars += JSON.stringify(it.call).length;
+    }
+    if (it.type === "function_call_result" && it.result) {
+      chars += JSON.stringify(it.result).length;
+    }
+  }
+  return Math.ceil(chars / 4);
+}
+
 // utils/sessions/compactConversation.ts
 import { Agent, run, system } from "@openai/agents";
 
+// utils/models/splitItemsIntelligently.ts
+function splitItemsIntelligently(items, targetRecentCount = 3) {
+  let splitIndex = Math.max(0, items.length - targetRecentCount);
+  while (splitIndex > 0 && splitIndex < items.length) {
+    const itemAtSplit = items[splitIndex];
+    if (itemAtSplit.type === "function_call_result") {
+      splitIndex--;
+    } else {
+      break;
+    }
+  }
+  return {
+    itemsToCompact: items.slice(0, splitIndex),
+    recentItems: items.slice(splitIndex)
+  };
+}
+
+// utils/sessions/modelContextLimits.ts
+function getModelContextLimit(modelName) {
+  if (!modelName) {
+    return DEFAULT_LIMIT;
+  }
+  const name = modelName.toLowerCase();
+  if (name.startsWith("gpt-")) {
+    if (name.startsWith("gpt-4")) {
+      return 128e3;
+    }
+    return 2e5;
+  }
+  if (name.startsWith("claude-")) {
+    if (name.startsWith("claude-3.7") || name.startsWith("claude-3.5") || name.startsWith("claude-3")) {
+      return 2e5;
+    }
+    return 1e6;
+  }
+  if (name.startsWith("gemini-")) {
+    return 1e6;
+  }
+  if (name.startsWith("ollama-")) {
+    return 8e3;
+  }
+  return DEFAULT_LIMIT;
+}
+function getCompactionTriggerTokens(modelName, fraction = 0.5) {
+  const limit = getModelContextLimit(modelName);
+  const f = Math.min(Math.max(fraction, 0.5), 0.95);
+  return Math.floor(limit * f);
+}
+var DEFAULT_LIMIT = 128e3;
+
 // utils/sessions/modelSettings.ts
 var modelsRequiringMediumVerbosity = [
   "gpt-4o"
@@ -2706,17 +2819,9 @@ function getModelSettings(modelName) {
 
 // utils/sessions/compactConversation.ts
 async function compactConversation(items) {
-  let splitIndex = Math.max(0, items.length - 3);
-  while (splitIndex > 0 && splitIndex < items.length) {
-    const itemAtSplit = items[splitIndex];
-    if (itemAtSplit.type === "function_call_result") {
-      splitIndex--;
-    } else {
-      break;
-    }
-  }
-  const recentItems = items.slice(splitIndex);
-  const itemsToCompact = items.slice(0, splitIndex);
+  const { itemsToCompact, recentItems } = splitItemsIntelligently(items);
+  const contextLimit = getModelContextLimit(trackedState.compactionModel);
+  const targetLimit = Math.floor(contextLimit * 0.9);
   let noticeContent = "... conversation history compacted ...";
   if (trackedState.compactionModel && itemsToCompact.length > 0) {
     try {
@@ -2726,15 +2831,38 @@ async function compactConversation(items) {
         modelSettings: getModelSettings(trackedState.compactionModel),
         instructions: "Compact the provided conversation history.\n- Focus on what is NOT completed and needs to be remembered for later.\n- Do NOT include file content or patches, it is available on the filesystem already. \n- Be concise."
       });
-      emitToListeners("SetCompacting", true);
-      const result = await run(
-        agent,
-        itemsToCompact
-      );
-      const summary = result.finalOutput;
-      if (summary && summary.trim().length > 0) {
+      const summaries = [];
+      let remainingItems = itemsToCompact;
+      while (remainingItems.length > 0) {
+        let currentBatch = [];
+        let lastGoodBatch = [];
+        let splitIdx = remainingItems.length;
+        while (splitIdx > 0) {
+          currentBatch = remainingItems.slice(0, splitIdx);
+          if (estimateTokens(currentBatch) <= targetLimit) {
+            lastGoodBatch = currentBatch;
+            break;
+          }
+          splitIdx = Math.floor(splitIdx * 0.8);
+        }
+        if (lastGoodBatch.length === 0) {
+          lastGoodBatch = [remainingItems[0]];
+          splitIdx = 1;
+        }
+        emitToListeners("SetCompacting", true);
+        const result = await run(
+          agent,
+          lastGoodBatch
+        );
+        const summary = result.finalOutput;
+        if (summary && summary.trim().length > 0) {
+          summaries.push(summary.trim());
+        }
+        remainingItems = remainingItems.slice(splitIdx);
+      }
+      if (summaries.length > 0) {
         noticeContent = `Key observations from earlier:
-${summary.trim()}`;
+${summaries.join("\n\n")}`;
       }
     } catch (err) {
       const msg = String(err?.message || err || "");
@@ -2742,6 +2870,9 @@ ${summary.trim()}`;
       if (!isNoTrace) {
        console.warn("Compaction summarization failed:", msg);
      }
+      const totalItems = itemsToCompact.length;
+      const toolCalls = itemsToCompact.filter((it) => it.type === "function_call").length;
+      noticeContent = `... conversation history compacted (${totalItems} items, ${toolCalls} tool calls) ...`;
     } finally {
       emitToListeners("SetCompacting", false);
     }
@@ -2750,41 +2881,38 @@ ${summary.trim()}`;
 
   return { noticeContent, itemsToAdd };
 }
 
-// utils/sessions/modelContextLimits.ts
-function getModelContextLimit(modelName) {
-  if (!modelName) {
-    return DEFAULT_LIMIT;
-  }
-  const name = modelName.toLowerCase();
-  if (name.startsWith("gpt-4o") || name.startsWith("gpt-5")) {
-    return 2e5;
-  }
-  if (name.startsWith("gpt-4")) {
-    return 128e3;
-  }
-  if (name.startsWith("claude-3.5") || name.startsWith("claude-3")) {
-    return 2e5;
-  }
-  if (name.startsWith("claude-4.6") || name.startsWith("claude-4.5")) {
-    return 1e6;
-  }
-  if (name.startsWith("gemini-1.5") || name.startsWith("gemini-3")) {
-    return 1e6;
+// utils/sessions/removeHarperInternalProviderData.ts
+function removeHarperInternalProviderData(it) {
+  if (!it || typeof it !== "object") {
+    return it;
   }
-  if (name.startsWith("gemini-")) {
-    return 128e3;
+  const out = { ...it };
+  if ("harper" in out) {
+    try {
+      delete out.harper;
+    } catch {
+    }
   }
-  if (name.startsWith("ollama-")) {
-    return 8e3;
+  const pd = out.providerData && typeof out.providerData === "object" ? { ...out.providerData } : void 0;
+  if (pd) {
+    if ("harper" in pd) {
+      try {
+        delete pd.harper;
+      } catch {
+      }
+    }
+    if (Object.keys(pd).length === 0) {
+      try {
+        delete out.providerData;
+      } catch {
+        out.providerData = void 0;
+      }
+    } else {
+      out.providerData = pd;
+    }
   }
-  return DEFAULT_LIMIT;
-}
-function getCompactionTriggerTokens(modelName, fraction = 0.5) {
-  const limit = getModelContextLimit(modelName);
-  const f = Math.min(Math.max(fraction, 0.5), 0.95);
-  return Math.floor(limit * f);
+  return out;
 }
-var DEFAULT_LIMIT = 128e3;
 
 // utils/sessions/MemoryCompactionSession.ts
 var MemoryCompactionSession = class {
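The hunk above relocates the item sanitizer (formerly `sanitizeItem` in MemoryCompactionSession.ts) to its own module as `removeHarperInternalProviderData`. Its observable behavior, without the defensive try/catch around the deletes, can be sketched as:

```javascript
// Sketch of removeHarperInternalProviderData: shallow-copies the item, drops a
// top-level `harper` key, drops `providerData.harper`, and removes `providerData`
// entirely when that leaves it empty. The original item is never mutated.
function removeHarperInternalProviderData(it) {
  if (!it || typeof it !== "object") return it;
  const out = { ...it };
  delete out.harper;
  if (out.providerData && typeof out.providerData === "object") {
    const pd = { ...out.providerData };
    delete pd.harper;
    if (Object.keys(pd).length === 0) {
      delete out.providerData;
    } else {
      out.providerData = pd;
    }
  }
  return out;
}

const cleaned = removeHarperInternalProviderData({
  role: "assistant",
  harper: { internal: true },
  providerData: { harper: { ts: 1 } }
});
console.log(cleaned); // harper and providerData stripped; only the role key remains
```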
@@ -2859,7 +2987,7 @@ var MemoryCompactionSession = class {
   }
   async getItems(limit) {
     const items = await this.underlyingSession.getItems(limit);
-    return items.map((it) => sanitizeItem(it));
+    return items.map((it) => removeHarperInternalProviderData(it));
   }
   /**
   * Returns the provider-stamped timestamp of the latest-added item, or null when unavailable.
@@ -2906,7 +3034,7 @@ var MemoryCompactionSession = class {
   * forcing flag is provided in args.
   * - If history is trivially small (<= 4 items), it skips compaction.
   * - Otherwise, it keeps the first item, adds a compaction notice (optionally
-  * summarized by the model), and retains the last 3 recent items.
+  * summarized by the model), and retains the last ~3 recent items.
   */
   async runCompaction(args) {
     const items = await this.getItems();
@@ -2930,66 +3058,6 @@ var MemoryCompactionSession = class {
     return null;
   }
 };
-function sanitizeItem(it) {
-  if (!it || typeof it !== "object") {
-    return it;
-  }
-  const out = { ...it };
-  if ("harper" in out) {
-    try {
-      delete out.harper;
-    } catch {
-    }
-  }
-  const pd = out.providerData && typeof out.providerData === "object" ? { ...out.providerData } : void 0;
-  if (pd) {
-    if ("harper" in pd) {
-      try {
-        delete pd.harper;
-      } catch {
-      }
-    }
-    if (Object.keys(pd).length === 0) {
-      try {
-        delete out.providerData;
-      } catch {
-        out.providerData = void 0;
-      }
-    } else {
-      out.providerData = pd;
-    }
-  }
-  return out;
-}
-function estimateTokens(items) {
-  let chars = 0;
-  for (const it of items) {
-    if (!it) {
-      continue;
-    }
-    if (Array.isArray(it.content)) {
-      for (const c of it.content) {
-        if (!c) {
-          continue;
-        }
-        if (typeof c.text === "string") {
-          chars += c.text.length;
-        } else if (typeof c.content === "string") {
-          chars += c.content.length;
-        } else if (typeof c === "string") {
-          chars += c.length;
-        }
-      }
-    }
-    if (typeof it.content === "string") {
-      chars += it.content.length;
-    }
-    if (typeof it.text === "string") {
-      chars += it.text.length;
-    }
-  }
-  return Math.ceil(chars / 4);
-}
 
 // utils/sessions/createSession.ts
 function createSession(sessionPath = null) {
@@ -3760,9 +3828,25 @@ var AgentManager = class {
       }
     }
     this.isInitialized = true;
+    if (trackedState.prompt?.trim?.()?.length) {
+      trackedState.autonomous = true;
+      setTimeout(
+        curryEmitToListeners("PushNewMessages", [
+          { type: "prompt", text: trackedState.prompt.trim(), version: 1 }
+        ]),
+        500
+      );
+    } else if (!this.initialMessages?.length) {
+      setTimeout(
+        curryEmitToListeners("PushNewMessages", [
+          { type: "agent", text: "What would you like to create together?", version: 1 }
+        ]),
+        500
+      );
+    }
   }
   enqueueUserInput(text) {
-    if (typeof text === "string" && text.trim().length > 0) {
+    if (text.trim().length > 0) {
       this.queuedUserInputs.push(text);
     }
   }
@@ -5519,6 +5603,21 @@ function DiffApprovalView() {
     result.push({ text: "Delete file: " + payload.path, color: "red" });
     return result;
   }
+  if (payload.type === "overwrite_file") {
+    for (const line of diffLines) {
+      let color;
+      if (line.startsWith("+")) {
+        color = "green";
+      } else if (line.startsWith("-")) {
+        color = "red";
+      }
+      const wrapped = wrapText(line, size.columns - 4);
+      for (const w of wrapped) {
+        result.push({ text: w, color });
+      }
+    }
+    return result;
+  }
   if (payload.type === "code_interpreter" && payload.code) {
     const lines = payload.code.split("\n");
     for (const line of lines) {
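The `overwrite_file` branch added to DiffApprovalView colors each preview line before wrapping it. Setting aside the terminal concerns (`wrapText` and `size.columns`), the color mapping reduces to:

```javascript
// Per-line color rule from the new overwrite_file preview branch:
// "+" lines render green, "-" lines red, everything else uncolored.
function colorForLine(line) {
  if (line.startsWith("+")) return "green";
  if (line.startsWith("-")) return "red";
  return undefined;
}

console.log(["+added", "-removed", " context"].map(colorForLine));
// → [ 'green', 'red', undefined ]
```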
@@ -6576,10 +6675,4 @@ function ensureApiKey() {
   }
   await agentManager.initialize();
   bootstrapMain();
-  if (trackedState.prompt?.trim?.()?.length) {
-    trackedState.autonomous = true;
-    emitToListeners("PushNewMessages", [
-      { type: "prompt", text: trackedState.prompt.trim(), version: 1 }
-    ]);
-  }
 })();
package/package.json CHANGED
@@ -1,7 +1,7 @@
 {
   "name": "@harperfast/agent",
   "description": "AI to help you with Harper app management",
-  "version": "0.13.8-ink",
+  "version": "0.14.0",
   "main": "dist/agent.js",
   "repository": "github:HarperFast/harper-agent",
   "bugs": {
@@ -11,7 +11,6 @@
   "scripts": {
     "dev": "tsup agent.ts --format esm --clean --dts --watch --external puppeteer",
     "link": "npm run build && npm link",
-    "attempt": "(cd ~/Downloads && harper-agent -p \"lets make a random-lofi-noise-generator app, please\" --max-turns=8)",
     "build": "tsup agent.ts --format esm --clean --dts --external puppeteer",
     "commitlint": "commitlint --edit",
     "start": "node ./dist/agent.js",