@hasna/assistants 0.6.54 → 0.6.56

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (24)
  1. package/README.md +4 -0
  2. package/dist/.assistants/commands/reflect.md +13 -0
  3. package/dist/.assistants/feedback/1c552736-058b-4a60-a67b-f0e7ec228f6d.json +8 -0
  4. package/dist/.assistants/feedback/324f7179-cb61-4591-803e-30877228610e.json +11 -0
  5. package/dist/.assistants/feedback/bf6c14e2-5264-4ecf-9858-05218e858546.json +8 -0
  6. package/dist/.assistants/projects/11a6d12e-a46f-4a6c-bbe8-bc58ebf2eddc.json +9 -0
  7. package/dist/.assistants/schedules/locks/4b758436-e846-441d-b1a2-aae03df551d1.lock.json +6 -0
  8. package/dist/.assistants/schedules/locks/ec03a634-6135-4bea-b9be-fb426095bba5.lock.json +6 -0
  9. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_10.json +92 -0
  10. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_11.json +82 -0
  11. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_12.json +92 -0
  12. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_13.json +82 -0
  13. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_2.json +102 -0
  14. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_3.json +82 -0
  15. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_4.json +82 -0
  16. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_5.json +82 -0
  17. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_6.json +82 -0
  18. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_7.json +82 -0
  19. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_8.json +82 -0
  20. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/ai_tweets_batch_9.json +92 -0
  21. package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/twitter_archive_entries.json +102 -0
  22. package/dist/index.js +7623 -1563
  23. package/dist/index.js.map +51 -35
  24. package/package.json +3 -3
package/dist/.assistants/scripts/f6be5a82-b00e-4b74-ba93-c610a7874776/twitter_archive_entries.json

@@ -0,0 +1,102 @@
+ [
+   {
+     "name": "Apple Xcode AI Integration - Ohneis652",
+     "tweet_text": "What did we actually get with Xcode 26.3? Agentic coding → Native support for OpenAI Codex (GPT-5.2-Codex, GPT-5.1 mini) → Anthropic Claude Agent built in",
+     "ai_reply": "apple finally admits developers need more than pretty interfaces. agentic coding in xcode means the walled garden just got useful",
+     "human_comment": "",
+     "status": "Ready for Review",
+     "tweet_url": "https://twitter.com/ohneisserdemy/status/2019008763022217566",
+     "author": "@ohneisserdemy",
+     "created_date": "2026-02-04T11:22:55.000Z"
+   },
+   {
+     "name": "Apple Xcode OpenAI Integration - Taylor Mason",
+     "tweet_text": "🚨 BREAKING: Apple just added Anthropic and OpenAI agents to Xcode. Devs can now connect Claude or GPT directly to Apple's IDE via API key. Uses open standard — other AI tools can plug in too.",
+     "ai_reply": "took apple long enough. developers were already using claude in other editors while xcode stayed stuck in 2019",
+     "human_comment": "",
+     "status": "Ready for Review",
+     "tweet_url": "https://twitter.com/olafornot/status/2018895663321203023",
+     "author": "@olafornot",
+     "created_date": "2026-02-04T03:53:30.000Z"
+   },
+   {
+     "name": "OPUS AI Trading Agents - OpusClaude",
+     "tweet_text": "$OPUS AI agents' performance over the past few hours. ➡️ Fully automated. Running 24/7/365. ➡️ Learns from its own mistakes. ➡️ Adapts to market conditions. ➡️ Optimized for performance.",
+     "ai_reply": "autonomous trading bots that learn from mistakes. sounds great until they learn the wrong lessons from market chaos",
+     "human_comment": "",
+     "status": "Ready for Review",
+     "tweet_url": "https://twitter.com/OpusClaude/status/2018998273499942914",
+     "author": "@OpusClaude",
+     "created_date": "2026-02-04T10:41:14.000Z"
+   },
+   {
+     "name": "DataHaven AI Architecture - Melissa Lynn",
+     "tweet_text": "The coolest fact about DataHaven is its AI first architecture. Unlike old storage it is optimized for verifiable data integrity for AI agents and machine learning models. Built to handle the future of autonomous intelligence @DataHaven_xyz",
+     "ai_reply": "storage built for AI workloads makes sense. traditional databases choke on the data patterns these models create",
+     "human_comment": "",
+     "status": "Ready for Review",
+     "tweet_url": "https://twitter.com/Slaydaraa/status/2018984822983811347",
+     "author": "@Slaydaraa",
+     "created_date": "2026-02-04T09:47:47.000Z"
+   },
+   {
+     "name": "LLM Learning Journey - Coding_Sage",
+     "tweet_text": "Day 30 of #100DaysOfCode [5 Hours Grind 🔥] Learned about LLM agents and fine-tuning. Understood how prompts turn into agents, how models are inferenced, fine-tuned, tokenized, processed through transformers, and detokenized back into responses.",
+     "ai_reply": "understanding the tokenizer-transformer-detokenizer flow is where real AI engineering starts. most people skip the fundamentals",
+     "human_comment": "",
+     "status": "Ready for Review",
+     "tweet_url": "https://twitter.com/Coding_Sage/status/2018878732820459977",
+     "author": "@Coding_Sage",
+     "created_date": "2026-02-04T02:46:13.000Z"
+   },
+   {
+     "name": "Brumby LLM $4000 Build - seeeeaaaannnnnn",
+     "tweet_text": "We built the strongest attention-free base LLM, Brumby-14B-Base, for only $4,000. This is just a base model, but for...",
+     "ai_reply": "attention mechanisms eat compute like crazy. alternatives that work at this scale could change everything for smaller teams",
+     "human_comment": "",
+     "status": "Ready for Review",
+     "tweet_url": "https://twitter.com/seeeeaaaannnnnn/status/2018810417775280582",
+     "author": "@seeeeaaaannnnnn",
+     "created_date": "2026-02-03T22:14:45.000Z"
+   },
+   {
+     "name": "GitHub AI Engineering Resources - nrqa__",
+     "tweet_text": "10 GitHub repositories that will teach you more practical AI engineering than most paid courses: 1. AI Agents for Beginners...",
+     "ai_reply": "github repos teach what courses avoid: the messy reality of making AI work in production. theory is cheap, implementation is expensive",
+     "human_comment": "",
+     "status": "Ready for Review",
+     "tweet_url": "https://twitter.com/nrqa__/status/2019008041656410568",
+     "author": "@nrqa__",
+     "created_date": "2026-02-04T11:20:03.000Z"
+   },
+   {
+     "name": "API Search Volume Analysis - NolanAntonucci",
+     "tweet_text": "Search volume for 'API' queries for ChatGPT 🟦, Claude 🟥, OpenAI 🟩, Gemini 🟪 & general API key 🟨",
+     "ai_reply": "api search volume tells the real adoption story. developers vote with their code, not their tweets",
+     "human_comment": "",
+     "status": "Ready for Review",
+     "tweet_url": "https://twitter.com/NolanAntonucci/status/2018931348354814171",
+     "author": "@NolanAntonucci",
+     "created_date": "2026-02-04T06:15:18.000Z"
+   },
+   {
+     "name": "Anthropic Pricing Strategy Change - doyamarke",
+     "tweet_text": "Anthropic has changed its pricing strategy. Before: Claude Cowork → Max plan only, $100/month. Now: Claude Cowork → also available on the $20/month Pro plan. Reportedly a response to the emergence of open-source competitors. [translated from Japanese]",
+     "ai_reply": "competition forces better pricing. anthropic felt the pressure from open source models and actually responded to users",
+     "human_comment": "",
+     "status": "Ready for Review",
+     "tweet_url": "https://twitter.com/doyamarke/status/2018988148886589504",
+     "author": "@doyamarke",
+     "created_date": "2026-02-04T10:01:00.000Z"
+   },
+   {
+     "name": "Building LLM from Scratch - the_sttts",
+     "tweet_text": "Challenge accepted 💪 Will talk about my Christmas holiday project NanoSchnack @ Cloud Native Heidelberg meetup, Feb 26. GPT-2 the Hard Way. Building a LLM from scratch. #transformers #attention #tokens #embeddings #training #pytorch #h100 #inference",
+     "ai_reply": "building transformers from scratch teaches you what really matters. most people use them without understanding the core mechanics",
+     "human_comment": "",
+     "status": "Ready for Review",
+     "tweet_url": "https://twitter.com/the_sttts/status/2018757153415110907",
+     "author": "@the_sttts",
+     "created_date": "2026-02-03T18:43:06.000Z"
+   }
+ ]
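Every batch file added in this diff carries the same flat entry schema (`name`, `tweet_text`, `ai_reply`, `human_comment`, `status`, `tweet_url`, `author`, `created_date`). A minimal sketch of a validator for that shape — the field names are taken from the diff above, but the validation rules themselves are assumptions, not part of the package:

```python
import json
from datetime import datetime

# Fields each entry in an ai_tweets_batch_*.json file carries,
# as seen in the hunk above.
REQUIRED_FIELDS = {
    "name", "tweet_text", "ai_reply", "human_comment",
    "status", "tweet_url", "author", "created_date",
}

def validate_entry(entry: dict) -> list:
    """Return a list of problems found in one batch entry (empty if OK)."""
    problems = []
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        problems.append("missing fields: %s" % sorted(missing))
    if not entry.get("tweet_url", "").startswith("https://twitter.com/"):
        problems.append("tweet_url is not a twitter.com link")
    if not entry.get("author", "").startswith("@"):
        problems.append("author is missing the @ prefix")
    try:
        # created_date uses ISO-8601 with a trailing Z in these files.
        datetime.fromisoformat(entry["created_date"].replace("Z", "+00:00"))
    except (KeyError, ValueError):
        problems.append("created_date is not ISO-8601")
    return problems

# One entry copied from the diff above, used as a smoke test.
sample = json.loads("""{
  "name": "API Search Volume Analysis - NolanAntonucci",
  "tweet_text": "Search volume for 'API' queries",
  "ai_reply": "api search volume tells the real adoption story",
  "human_comment": "",
  "status": "Ready for Review",
  "tweet_url": "https://twitter.com/NolanAntonucci/status/2018931348354814171",
  "author": "@NolanAntonucci",
  "created_date": "2026-02-04T06:15:18.000Z"
}""")

print(validate_entry(sample))  # → []
```

Since `human_comment` is empty and `status` is "Ready for Review" in every entry shown, the files look like a review queue; a stricter validator could also pin `status` to a known set of states.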