@buivietphi/skill-mobile-mt 1.4.0 → 1.4.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of @buivietphi/skill-mobile-mt has been flagged; see the registry's advisory page for details.
- package/AGENTS.md +0 -5
- package/README.md +1 -4
- package/SKILL.md +1 -4
- package/bin/install.mjs +1 -1
- package/package.json +1 -1
- package/shared/on-device-ai.md +0 -175
package/AGENTS.md
CHANGED

@@ -62,7 +62,6 @@ skill-mobile-mt/
 ├── release-checklist.md      ← App Store/Play Store checklist (587 tokens)
 │
 ├── offline-first.md          ← Local-first + sync patterns (2,566 tokens)
-├── on-device-ai.md           ← Core ML / TFLite / llama.cpp patterns (700 tokens)
 │
 ├── ── TEMPLATES (copy to your project) ────────────────────
 ├── claude-md-template.md     ← CLAUDE.md for Claude Code (copy to project root)
@@ -110,7 +109,6 @@ The agent reads the task, then decides which extra file to load:
 | "Install this package / upgrade SDK" | `shared/version-management.md` |
 | "Prepare for App Store / Play Store" | `shared/release-checklist.md` |
 | "Weird issue, not sure why" | `shared/common-pitfalls.md` |
-| "On-device AI / ML / inference" | `shared/on-device-ai.md` |

 **Load cost:** +500 to +3,500 tokens per on-demand file.

@@ -178,7 +176,6 @@ skill:
 # - shared/document-analysis.md
 # - shared/release-checklist.md
 # - shared/common-pitfalls.md
-# - shared/on-device-ai.md

 project:
   description: "Read current project, adapt to its framework and conventions"
@@ -261,7 +258,6 @@ Every agent MUST follow this loading sequence:
 - shared/observability.md (when adding logging, analytics, crash tracking)
 - shared/common-pitfalls.md (when encountering unfamiliar errors)
 - shared/release-checklist.md (when preparing for App Store/Play Store submission)
-- shared/on-device-ai.md (when adding Core ML / TFLite / on-device inference)

 7. SKIP non-matching platform subfolders (saves ~66% context)
 ```
@@ -283,7 +279,6 @@ Priority 6 (ON-DEMAND): shared/observability.md — Sessions as 4th pillar
 Priority 6 (ON-DEMAND): shared/document-analysis.md — Parse images/PDFs → code
 Priority 6 (ON-DEMAND): shared/release-checklist.md — Pre-release verification
 Priority 6 (ON-DEMAND): shared/common-pitfalls.md — Known issue patterns
-Priority 6 (ON-DEMAND): shared/on-device-ai.md — Core ML / TFLite / llama.cpp
 ```

 ---
package/README.md
CHANGED

@@ -259,10 +259,9 @@ iOS only?
 | `shared/version-management.md` | 3,500 |
 | `shared/observability.md` | 3,000 |
 | `shared/offline-first.md` | 2,566 |
-| `shared/on-device-ai.md` | 700 |
 | `shared/claude-md-template.md` | ~500 |
 | `shared/agent-rules-template.md` | ~2,500 |
-| **Total** | **~
+| **Total** | **~48,800** |

 ## Installed Structure

@@ -294,7 +293,6 @@ iOS only?
 ├── observability.md          Sessions as 4th pillar
 ├── release-checklist.md      Pre-release verification
 ├── offline-first.md          Local-first + sync patterns
-├── on-device-ai.md           Core ML / TFLite / llama.cpp
 ├── claude-md-template.md     CLAUDE.md template for projects
 └── agent-rules-template.md   Rules templates for all agents
 ```
@@ -390,7 +388,6 @@ your-project/
 - **Anti-Pattern Detection** (`anti-patterns.md`): Detect PII leaks (CRITICAL), high cardinality tags, unbounded payloads, unstructured logs, sync telemetry on main thread — with auto-fix suggestions
 - **Performance Prediction** (`performance-prediction.md`): Calculate frame budget, FlatList bridge calls, and memory usage BEFORE writing code. Example: `50 items × 3 bridge calls × 0.3ms = 45ms/frame → 22 FPS ❌ JANK`
 - **Platform Excellence** (`platform-excellence.md`): iOS 18+ vs Android 15+ native UX standards — navigation patterns, typography, haptic feedback types, permission timing, ratings prompt flow, Live Activities/Dynamic Island, performance targets (cold start < 1s iOS, < 1.5s Android)
-- **On-Device AI** (`on-device-ai.md`): Decision matrix (API vs on-device), Core ML (iOS), ML Kit + MediaPipe (Android), llama.cpp cross-platform, TFLite Flutter, React Native ML Kit — with performance rules and model size guidance
 - **Version Management** (`version-management.md`): Full SDK compatibility matrix for RN 0.73-0.76, Expo 50-52, Flutter 3.22-3.27, iOS 16-18, Android 13-15. Check SDK compat BEFORE `npm install`. Release-mode testing protocol.
 - **Observability** (`observability.md`): Sessions as the 4th pillar (Metrics + Logs + Traces + **Sessions**). Session lifecycle, enrichment API, unified instrumentation stack, correlation queries. Every event carries `session_id` for full user journey reconstruction.
package/SKILL.md
CHANGED

@@ -1,7 +1,7 @@
 ---
 name: skill-mobile-mt
 description: "Master Senior Mobile Engineer. Patterns from 30+ production repos (200k+ GitHub stars: Ignite, Expensify, Mattermost, Immich, AppFlowy, Now in Android, TCA). Use when: building mobile features, fixing mobile bugs, reviewing mobile code, mobile architecture, React Native, Flutter, iOS Swift, Android Kotlin, mobile performance, mobile security audit, mobile code review, app release. Two modes: (1) default = pre-built production patterns, (2) 'project' = reads current project and adapts."
-version: "1.4.0"
+version: "1.4.1"
 author: buivietphi
 priority: high
 user-invocable: true
@@ -130,9 +130,6 @@ USER REQUEST → ACTION (Read tool required)
 "Offline / cache / sync" → Read: shared/offline-first.md
   then: implement local-first architecture

-"On-device AI / ML / inference" → Read: shared/on-device-ai.md
-  then: choose Core ML / TFLite / llama.cpp per platform
-
 ```

 **⛔ NEVER start coding without identifying the task type first.**
package/bin/install.mjs
CHANGED

@@ -70,7 +70,7 @@ const fail = m => log(`  ${c.red}✗${c.reset} ${m}`);

 function banner() {
   log(`\n${c.bold}${c.cyan} ┌──────────────────────────────────────────────────┐`);
-  log(` │ 📱 @buivietphi/skill-mobile-mt v1.4.0 │`);
+  log(` │ 📱 @buivietphi/skill-mobile-mt v1.4.1 │`);
   log(` │ Master Senior Mobile Engineer │`);
   log(` │ │`);
   log(` │ Claude · Cline · Roo Code · Cursor · Windsurf │`);
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@buivietphi/skill-mobile-mt",
-  "version": "1.4.0",
+  "version": "1.4.1",
   "description": "Master Senior Mobile Engineer skill for AI agents. Pre-built patterns from 18 production apps + local project adaptation. React Native, Flutter, iOS, Android. Supports Claude, Gemini, Kimi, Cursor, Copilot, Antigravity.",
   "author": "buivietphi",
   "license": "MIT",
package/shared/on-device-ai.md
DELETED

@@ -1,175 +0,0 @@
-# On-Device AI — Mobile ML Integration
-
-> On-demand. Load when: "on-device AI", "ML model", "Core ML", "TFLite", "MediaPipe", "llama", "inference", "local model"
-> Source: llama.cpp, Core ML, MediaPipe, TensorFlow Lite
-
----
-
-## Decision Matrix
-
-```
-Use case                            Solution
-───────────────────────────────────────────────────────────
-Image classification / OCR          Core ML (iOS) / ML Kit (Android)
-Text classification / sentiment     Core ML NLP / ML Kit
-Face detection / pose estimation    Vision (iOS) / MediaPipe
-On-device LLM chat (<7B params)     llama.cpp / llama.rn / executorch
-Cloud LLM (>7B / latest models)     API call — don't run on device
-Real-time object detection          Core ML / TFLite + MediaPipe
-Speech to text (on-device)          SFSpeechRecognizer (iOS) / ML Kit (Android)
-
-Rule: If model > 500MB → use API. If latency > 3s acceptable → use API.
-```
-
----
-
-## iOS — Core ML
-
-```swift
-// 1. Import model (drag .mlpackage into Xcode)
-import CoreML
-import Vision
-
-// 2. Image classification
-let model = try VNCoreMLModel(for: MyClassifier(configuration: .init()).model)
-let request = VNCoreMLRequest(model: model) { request, _ in
-    guard let results = request.results as? [VNClassificationObservation] else { return }
-    let top = results.first!
-    print("\(top.identifier): \(top.confidence)")
-}
-let handler = VNImageRequestHandler(cgImage: image, options: [:])
-try handler.perform([request])
-
-// 3. NLP text classification
-import NaturalLanguage
-let classifier = NLModel(mlModel: SentimentClassifier().model)
-let label = classifier.predictedLabel(for: "This is great!")
-
-// Model conversion: use coremltools Python package
-// coremltools.convert(pytorch_model, inputs=[...])
-```
-
----
-
-## Android — ML Kit + MediaPipe
-
-```kotlin
-// ML Kit — text recognition (no model download needed)
-dependencies {
-    implementation("com.google.mlkit:text-recognition:16.0.0")
-    implementation("com.google.mlkit:face-detection:16.1.5")
-}
-
-val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
-recognizer.process(inputImage)
-    .addOnSuccessListener { result -> result.text }
-    .addOnFailureListener { e -> /* handle */ }
-
-// MediaPipe — pose / hand / face landmark detection
-dependencies {
-    implementation("com.google.mediapipe:tasks-vision:0.10.14")
-}
-
-val handLandmarker = HandLandmarker.createFromOptions(context,
-    HandLandmarkerOptions.builder()
-        .setBaseOptions(BaseOptions.builder().setModelAssetPath("hand_landmarker.task").build())
-        .setNumHands(2)
-        .build()
-)
-```
-
----
-
-## On-Device LLM — llama.cpp (Cross-Platform)
-
-```
-Model sizes (GGUF Q4_K_M quantization):
-Llama 3.2 3B   → ~2GB RAM   ✅ Phone-friendly
-Llama 3.1 8B   → ~5GB RAM   ⚠️ High-end only (iPhone 15 Pro, Pixel 9)
-Llama 3.1 70B  → ~40GB RAM  ❌ Not feasible on device
-
-Download: huggingface.co/models?search=gguf
-```
-
-```swift
-// iOS — llama.swift
-// https://github.com/ggerganov/llama.cpp (Swift bindings included)
-import llama
-
-let model = llama_load_model_from_file(modelPath, llama_model_default_params())
-let ctx = llama_new_context_with_model(model, llama_context_default_params())
-// Tokenize + run inference on background thread
-```
-
-```javascript
-// React Native — llama.rn
-// npm install llama.rn
-import { LlamaContext } from 'llama.rn';
-
-const context = await LlamaContext.create({
-  model: `${RNFS.DocumentDirectoryPath}/model.gguf`,
-  n_ctx: 2048,
-  n_threads: 4,
-});
-const result = await context.completion({ prompt: 'Hello!', n_predict: 100 });
-```
-
-```dart
-// Flutter — flutter_llama (or use Platform.channel to llama.cpp)
-// For production: use executorch (Meta) or llama.cpp via FFI
-```
-
----
-
-## React Native — ML Kit (via react-native-mlkit)
-
-```javascript
-// npm install @infinitered/react-native-mlkit-core
-// npm install @infinitered/react-native-mlkit-object-detection
-
-import { ObjectDetectionCamera } from '@infinitered/react-native-mlkit-object-detection';
-
-// Image labeling
-import MLKitImageLabeling from '@react-native-ml-kit/image-labeling';
-const labels = await MLKitImageLabeling.label(imageUri);
-// Returns: [{ text: 'Cat', confidence: 0.95 }]
-```
-
----
-
-## Flutter — tflite_flutter
-
-```dart
-// pubspec.yaml: tflite_flutter: ^0.10.4
-import 'package:tflite_flutter/tflite_flutter.dart';
-
-final interpreter = await Interpreter.fromAsset('model.tflite');
-final input = [imageData]; // pre-processed tensor
-final output = List.filled(1000, 0).reshape([1, 1000]);
-interpreter.run(input, output);
-// output[0] = probability for each class
-```
-
----
-
-## Performance Rules
-
-```
-1. NEVER run inference on the main thread
-   iOS: DispatchQueue.global(qos: .userInitiated).async { ... }
-   Android: viewModelScope.launch(Dispatchers.Default) { ... }
-   RN: run on JS thread or use NativeModule
-
-2. Load model ONCE — cache in memory
-   ❌ Load model on every inference call
-   ✅ Load at app start or first use, keep reference
-
-3. Batch requests when possible
-   - Process images in background queue, not per-tap
-
-4. Show progress for operations >500ms
-   - Spinner or progress bar — user expects AI to take a moment
-
-5. Fallback to API if device is low on memory
-   let memoryPressure = ProcessInfo.processInfo.isLowPowerModeEnabled
-```