@buivietphi/skill-mobile-mt 1.2.0 → 1.4.0

@@ -0,0 +1,175 @@
+ # On-Device AI — Mobile ML Integration
+
+ > On-demand. Load when: "on-device AI", "ML model", "Core ML", "TFLite", "MediaPipe", "llama", "inference", "local model"
+ > Source: llama.cpp, Core ML, MediaPipe, TensorFlow Lite
+
+ ---
+
+ ## Decision Matrix
+
+ ```
+ Use case                          Solution
+ ───────────────────────────────────────────────────────────
+ Image classification / OCR        Core ML (iOS) / ML Kit (Android)
+ Text classification / sentiment   Core ML NLP / ML Kit
+ Face detection / pose estimation  Vision (iOS) / MediaPipe
+ On-device LLM chat (<7B params)   llama.cpp / llama.rn / executorch
+ Cloud LLM (>7B / latest models)   API call — don't run on device
+ Real-time object detection        Core ML / TFLite + MediaPipe
+ Speech to text (on-device)        SFSpeechRecognizer (iOS) / android.speech.SpeechRecognizer (Android)
+
+ Rule: If the model is >500MB → use an API (see the sketch below).
+       If >3s latency is acceptable → use an API.
+ ```
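+
+ A minimal Swift sketch of the size rule above; `modelURL` and `runCloudInference` are hypothetical names, not framework API:
+
+ ```swift
+ import Foundation
+
+ /// Decide local vs. cloud inference from the on-disk model size (rule of thumb: ≤500MB on-device).
+ func shouldRunOnDevice(modelURL: URL, maxBytes: Int64 = 500 * 1_024 * 1_024) -> Bool {
+     let attrs = try? FileManager.default.attributesOfItem(atPath: modelURL.path)
+     let size = (attrs?[.size] as? Int64) ?? .max  // unknown size → treat as too big
+     return size <= maxBytes
+ }
+
+ // Usage: if shouldRunOnDevice(modelURL: url) { runLocalInference() } else { runCloudInference() }
+ ```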
+
+ ---
+
+ ## iOS — Core ML
+
+ ```swift
+ // 1. Import model (drag .mlpackage into Xcode)
+ import CoreML
+ import Vision
+ import NaturalLanguage
+
+ // 2. Image classification
+ let model = try VNCoreMLModel(for: MyClassifier(configuration: .init()).model)
+ let request = VNCoreMLRequest(model: model) { request, _ in
+     guard let results = request.results as? [VNClassificationObservation],
+           let top = results.first else { return }
+     print("\(top.identifier): \(top.confidence)")
+ }
+ let handler = VNImageRequestHandler(cgImage: image, options: [:])
+ try handler.perform([request])
+
+ // 3. NLP text classification
+ let classifier = try NLModel(mlModel: SentimentClassifier().model)
+ let label = classifier.predictedLabel(for: "This is great!")
+
+ // Model conversion: use the coremltools Python package
+ // coremltools.convert(pytorch_model, inputs=[...])
+ ```
+
+ ---
+
+ ## Android — ML Kit + MediaPipe
+
+ ```kotlin
+ // ML Kit — text recognition (no model download needed)
+ dependencies {
+     implementation("com.google.mlkit:text-recognition:16.0.0")
+     implementation("com.google.mlkit:face-detection:16.1.5")
+ }
+
+ val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
+ recognizer.process(inputImage)
+     .addOnSuccessListener { result -> result.text }
+     .addOnFailureListener { e -> /* handle */ }
+
+ // MediaPipe — pose / hand / face landmark detection
+ dependencies {
+     implementation("com.google.mediapipe:tasks-vision:0.10.14")
+ }
+
+ val handLandmarker = HandLandmarker.createFromOptions(context,
+     HandLandmarkerOptions.builder()
+         .setBaseOptions(BaseOptions.builder().setModelAssetPath("hand_landmarker.task").build())
+         .setNumHands(2)
+         .build()
+ )
+ ```
+
+ ---
+
+ ## On-Device LLM — llama.cpp (Cross-Platform)
+
+ ```
+ Model sizes (GGUF Q4_K_M quantization):
+ Llama 3.2 3B   → ~2GB RAM   ✅ Phone-friendly
+ Llama 3.1 8B   → ~5GB RAM   ⚠️ High-end only (iPhone 15 Pro, Pixel 9)
+ Llama 3.1 70B  → ~40GB RAM  ❌ Not feasible on device
+
+ Download: huggingface.co/models?search=gguf
+ ```
+
+ ```swift
+ // iOS — llama.swift
+ // https://github.com/ggerganov/llama.cpp (Swift bindings included)
+ import llama
+
+ let model = llama_load_model_from_file(modelPath, llama_model_default_params())
+ let ctx = llama_new_context_with_model(model, llama_context_default_params())
+ // Tokenize + run inference on a background thread
+ ```
+
+ ```javascript
+ // React Native — llama.rn
+ // npm install llama.rn react-native-fs
+ import { initLlama } from 'llama.rn';
+ import RNFS from 'react-native-fs';
+
+ const context = await initLlama({
+   model: `${RNFS.DocumentDirectoryPath}/model.gguf`,
+   n_ctx: 2048,
+   n_threads: 4,
+ });
+ const result = await context.completion({ prompt: 'Hello!', n_predict: 100 });
+ ```
+
+ ```dart
+ // Flutter — flutter_llama (or bridge to llama.cpp via a platform channel)
+ // For production: use executorch (Meta) or llama.cpp via FFI
+ ```
+
+ ---
+
+ ## React Native — ML Kit (via react-native-mlkit)
+
+ ```javascript
+ // npm install @infinitered/react-native-mlkit-core
+ // npm install @infinitered/react-native-mlkit-object-detection
+
+ import { ObjectDetectionCamera } from '@infinitered/react-native-mlkit-object-detection';
+
+ // Image labeling
+ import MLKitImageLabeling from '@react-native-ml-kit/image-labeling';
+ const labels = await MLKitImageLabeling.label(imageUri);
+ // Returns: [{ text: 'Cat', confidence: 0.95 }]
+ ```
+
+ ---
+
+ ## Flutter — tflite_flutter
+
+ ```dart
+ // pubspec.yaml: tflite_flutter: ^0.10.4
+ import 'package:tflite_flutter/tflite_flutter.dart';
+
+ final interpreter = await Interpreter.fromAsset('model.tflite');
+ final input = [imageData]; // pre-processed input tensor
+ // Float32 output: fill with doubles, not ints, to match the model's output type
+ final output = List.filled(1 * 1000, 0.0).reshape([1, 1000]);
+ interpreter.run(input, output);
+ // output[0] = probability for each class
+ ```
+
+ ---
+
+ ## Performance Rules
+
+ ```
+ 1. NEVER run inference on the main thread
+    iOS: DispatchQueue.global(qos: .userInitiated).async { ... }
+    Android: viewModelScope.launch(Dispatchers.Default) { ... }
+    RN: offload heavy work to a NativeModule; blocking the JS thread also janks the UI
+
+ 2. Load model ONCE — cache in memory
+    ❌ Load model on every inference call
+    ✅ Load at app start or first use, keep reference
+
+ 3. Batch requests when possible
+    - Process images in a background queue, not per-tap
+
+ 4. Show progress for operations >500ms
+    - Spinner or progress bar — users expect AI to take a moment
+
+ 5. Fall back to the API if the device is low on memory
+    Note: ProcessInfo.processInfo.isLowPowerModeEnabled detects Low Power Mode,
+    not memory pressure; check available memory instead (see the sketch below)
+ ```
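+
+ A minimal Swift sketch of rules 2 and 5 combined. `Classifier` is a hypothetical Xcode-generated Core ML class and the 2GB threshold is illustrative; `os_proc_available_memory()` (iOS 13+, os framework) reports how much memory the current process may still use:
+
+ ```swift
+ import CoreML
+ import os
+
+ // Rule 2: load the model once and keep the instance for the app's lifetime.
+ enum ModelCache {
+     static let classifier: Classifier? = try? Classifier(configuration: MLModelConfiguration())
+ }
+
+ // Rule 5: only run on-device when enough memory headroom remains.
+ func canRunLocally(requiredBytes: Int = 2 * 1_024 * 1_024 * 1_024) -> Bool {
+     os_proc_available_memory() > requiredBytes
+ }
+
+ // Usage: if let model = ModelCache.classifier, canRunLocally() { /* local */ } else { /* cloud API */ }
+ ```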
@@ -144,6 +144,86 @@ export const LoginScreen = Platform.select({
 
 ---
 
+ ## iOS Haptics
+
+ ```swift
+ // 3 feedback types — use the right one
+ UIImpactFeedbackGenerator(style: .medium).impactOccurred()        // button tap, card flip
+ UINotificationFeedbackGenerator().notificationOccurred(.success)  // save success / error / warning
+ UISelectionFeedbackGenerator().selectionChanged()                 // picker scroll, toggle
+
+ // ✅ Rules
+ // - Impact: physical interactions (drag & drop, button press)
+ // - Notification: outcomes (success, error, warning) — max 1 per action
+ // - Selection: discrete value changes (picker, slider step)
+ // ⛔ Never chain multiple haptics in <300ms
+ // ⛔ Never use for routine navigation (back, tab switch)
+ ```
+
+ ## Permission Timing (iOS/Android)
+
+ ```
+ RULE: Ask ONLY when the feature needs it — not at launch
+
+ Permission      When to ask
+ ─────────────────────────────────────────────────────
+ Camera          User taps "Take Photo" button
+ Location        User taps "Find Nearby" or a map feature
+ Contacts        User taps "Invite from Contacts"
+ Notifications   After onboarding — show a pre-permission dialog first
+ Microphone      User taps "Record Voice Note"
+
+ PRE-PERMISSION DIALOG (iOS — before the system prompt, see the sketch below):
+ "Get notified when teammates reply"
+ [Allow] [Not now]
+ → Only show the system prompt if the user taps Allow
+ → You get one shot at the system prompt — don't waste it at cold start
+ ```
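+
+ A minimal Swift sketch of the pre-permission flow above, assuming a UIKit context; only the system prompt consumes your one shot:
+
+ ```swift
+ import UIKit
+ import UserNotifications
+
+ func askForNotificationPermission(from vc: UIViewController) {
+     // Step 1: our own dialog; declining here costs nothing
+     let alert = UIAlertController(title: "Get notified when teammates reply",
+                                   message: nil, preferredStyle: .alert)
+     alert.addAction(UIAlertAction(title: "Not now", style: .cancel))
+     alert.addAction(UIAlertAction(title: "Allow", style: .default) { _ in
+         // Step 2: only now trigger the one-shot system prompt
+         UNUserNotificationCenter.current()
+             .requestAuthorization(options: [.alert, .badge, .sound]) { granted, _ in
+                 print("notifications granted: \(granted)")
+             }
+     })
+     vc.present(alert, animated: true)
+ }
+ ```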
+
+ ## Ratings Timing
+
+ ```
+ // 2-step flow — ask only after success
+ Step 1: "Is [App] helping you get things done?"
+         [Yes!] [Not really]
+
+ Step 2 (if Yes): "Mind leaving a review? It helps us a lot."
+                  [Sure] [Maybe later]
+ Step 2 (if No):  "What's getting in the way?" [Give feedback]
+
+ // iOS: Use SKStoreReviewController.requestReview() — max 3x/year (see the sketch below)
+ // Android: Use ReviewManager from the Play Core library
+ // NEVER ask after an error, payment, or on app cold start
+ ```
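+
+ A minimal Swift sketch of the gating logic; `completedTasksCount` and the threshold of 5 are hypothetical, `requestReview(in:)` is the iOS 14+ API, and the system itself enforces the 3x/year cap:
+
+ ```swift
+ import StoreKit
+ import UIKit
+
+ func recordSuccessAndMaybeAskForReview(in scene: UIWindowScene) {
+     // Hypothetical success metric: count "clear success moments", never errors
+     let defaults = UserDefaults.standard
+     let completedTasksCount = defaults.integer(forKey: "completedTasks") + 1
+     defaults.set(completedTasksCount, forKey: "completedTasks")
+
+     // Gate: only after repeated success, never after an error, payment, or cold start
+     guard completedTasksCount >= 5 else { return }
+     SKStoreReviewController.requestReview(in: scene)
+ }
+ ```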
+
+ ## Live Activities / Dynamic Island (iOS 16.1+)
+
+ ```swift
+ import ActivityKit
+
+ // 1. Define attributes
+ struct DeliveryAttributes: ActivityAttributes {
+     struct ContentState: Codable, Hashable {
+         var status: String
+         var eta: Date
+     }
+     var orderId: String
+ }
+
+ // 2. Start activity
+ let initialState = DeliveryAttributes.ContentState(status: "Preparing", eta: Date())
+ let activity = try? Activity.request(
+     attributes: DeliveryAttributes(orderId: "123"),
+     content: .init(state: initialState, staleDate: nil)
+ )
+
+ // 3. Update
+ await activity?.update(.init(state: .init(status: "Out for delivery", eta: Date()), staleDate: nil))
+
+ // 4. End
+ await activity?.end(dismissalPolicy: .default)
+ ```
+
+ ---
+
 ## Anti-Patterns
 
 ```
@@ -151,9 +231,14 @@ export const LoginScreen = Platform.select({
 ❌ iOS navigation on Android
 ❌ Ignoring platform conventions
 ❌ "Write once, look mediocre everywhere"
+ ❌ Asking permissions at app launch
+ ❌ Chaining multiple haptics back-to-back
+ ❌ Rating prompt right after install
 
 ✅ Native look & feel per platform
 ✅ Shared logic, platform UI
 ✅ Respect platform guidelines
 ✅ "Write once, look native everywhere"
+ ✅ Ask permissions at the moment they're needed
+ ✅ Rating prompts only after clear success moments
 ```
@@ -66,6 +66,14 @@
 BUG: [description]
 FILE: [path]
 
+ <source_verification>
+ ⚠️ BEFORE analyzing — verify I have real data:
+ - [ ] Read the actual file (not guessing from the error message)
+ - [ ] Verified function/class names exist (grep)
+ - [ ] Checked package versions in package.json/pubspec.yaml
+ - [ ] Identified data types from actual code (not assumed)
+ </source_verification>
+
 <context_needed>
 - Read [file] to understand current implementation
 - Grep for similar patterns: grep "[pattern]" src/
@@ -77,6 +85,7 @@ FILE: [path]
 - What code is executed?
 - What values are passed?
 - What conditions are checked?
+ SOURCE: [file:line where the bug is — cite the exact location]
 </root_cause>
 
 <fix>
@@ -84,6 +93,7 @@ FILE: [path]
 
 WHY IT WORKS:
 [Explain the fix based on the root cause]
+ SOURCE: [where this fix pattern comes from — project code / skill file / official docs]
 </fix>
 
 <side_effects>
@@ -165,6 +175,14 @@ BUG: App crashes when tapping product with no images
 FEATURE: [description]
 PLATFORM: [React Native / Flutter / iOS / Android]
 
+ <source_verification>
+ ⚠️ GROUNDING CHECK:
+ - [ ] Scanned actual project structure (not assumed)
+ - [ ] Found real reference feature (not imagined)
+ - [ ] Will read reference files before cloning pattern
+ - [ ] Will verify all imports/packages exist before using
+ </source_verification>
+
 STEP 1: SCAN PROJECT
 - List screens: ls src/screens/
 - Find similar features: grep -r "useState" src/screens/
@@ -620,6 +638,16 @@ Faster than reading documentation.
 "Create login screen following pattern in src/screens/ProductScreen
 using Redux slice from src/store/slices/authSlice
 with email/password fields + remember me checkbox"
+
+ ❌ Hallucinated answers (AI "intuition")
+ "Use react-native-awesome-picker for this" ← package may not exist
+ "Call the fetchUserData() method" ← function may not exist
+ "This API returns { data: [...] }" ← response shape may be wrong
+
+ ✅ Grounded answers (verified from source)
+ Read package.json → verify the package exists → then suggest usage
+ Grep "fetchUser" src/ → find the actual function → then reference it
+ Read the API service file → check the actual response shape → then use it
 ```
 
 ---
@@ -9,6 +9,23 @@
 
 ---
 
+ ## ⚠️ IMPORTANT: Always WebSearch for Latest
+
+ **The version matrix below is a SNAPSHOT and may be outdated.**
+ **Before installing ANY package or suggesting ANY version:**
+
+ ```
+ 1. WebSearch "[package] latest version [current year]"
+ 2. WebSearch "[package] [framework version] compatibility"
+ 3. Read the OFFICIAL changelog/migration guide
+ 4. Cross-reference with the matrix below
+
+ ⛔ NEVER rely solely on the matrix below — it was written at a point in time
+ ✅ ALWAYS verify with WebSearch for the most current information
+ ```
+
+ ---
+
 ## Version Matrix (React Native / Expo)
 
 ### Expo SDK Compatibility