ftc-mcp 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,1869 @@
+ "use strict";
+ Object.defineProperty(exports, "__esModule", { value: true });
+ exports.VISION_KNOWLEDGE = void 0;
+ exports.VISION_KNOWLEDGE = {
+     overview: `
+ ## FTC Vision Systems Overview
+
+ FTC teams have two primary vision options: **USB webcams** processed via the SDK's VisionPortal,
+ and the **Limelight 3A** smart camera. Both can detect AprilTags and game elements; they differ
+ in processing location, setup complexity, and capability.
+
+ ### Option 1: USB Webcam + VisionPortal (SDK Built-In)
+
+ The FTC SDK (v9.0+) includes VisionPortal — a built-in vision framework that runs on the
+ Control Hub's CPU. It processes frames from any UVC-compatible USB webcam.
+
+ **Capabilities:**
+ - AprilTag detection and 6DOF pose estimation (built-in AprilTagProcessor)
+ - Custom color/object detection via VisionProcessor interface + OpenCV
+ - Multiple processors running simultaneously on the same camera
+ - Camera controls: exposure, gain, focus, white balance
+ - Dual camera support (MultiPortal)
+ - Dashboard camera streaming
+
+ **When to use:**
+ - Budget-constrained teams (webcams cost $10-30)
+ - Need custom OpenCV pipelines (color detection, contour analysis)
+ - Running multiple detection processors simultaneously
+ - Need full control over the processing pipeline
+
+ **Recommended webcams:**
+ - Logitech C270 — cheapest, fixed focus, 1280x720 max, good for AprilTags
+ - Logitech C920/C922 — adjustable focus, up to 1920x1080, better image quality
+ - Logitech C615 — compact, adjustable focus, good all-rounder
+ - Any UVC-compatible USB webcam works
+
+ ### Option 2: Limelight 3A (Smart Camera)
+
+ The Limelight 3A is a dedicated vision coprocessor with an integrated camera. It runs its own
+ processing pipeline independently from the Control Hub CPU. The FTC SDK treats it as a hardwareMap device.
+
+ **Capabilities:**
+ - AprilTag detection with MegaTag1/MegaTag2 robot localization
+ - Built-in color blob tracking (90 FPS at 640x480)
+ - Neural network object detection and classification
+ - Barcode/QR code reading
+ - Custom Python (SnapScript) pipelines with OpenCV + numpy
+ - 10 hot-swappable pipelines configurable via web UI
+ - Built-in FTC field map for the current season
+ - Zero Control Hub CPU load for vision processing
+
+ **When to use:**
+ - Want fast, reliable AprilTag localization (MegaTag2 is best-in-class)
+ - Need high FPS color tracking without impacting loop time
+ - Don't want to write custom OpenCV code
+ - Budget allows ($275 for the device)
+
+ **Specs:**
+ - Sensor: OV5647 (640x480 @ 90 FPS)
+ - FOV: 54.5° horizontal, 42° vertical
+ - Connection: USB-C to Control Hub USB 3.0 port
+ - Power: 4.1V-5.75V via USB, max 4W
+ - Boot time: ~15-20 seconds
+ - No LED illumination, no Google Coral neural accelerator
+
+ ### Can You Use Both?
+ Yes. A USB webcam with VisionPortal and a Limelight 3A can run simultaneously.
+ They use separate USB ports and separate processing (VisionPortal on Control Hub CPU,
+ Limelight on its own processor). Common pattern: Limelight for AprilTag localization,
+ webcam for custom color detection.
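+
+ A minimal sketch of that combined pattern (the device names "Webcam 1" and "limelight" are
+ assumed from your robot configuration; myColorProcessor is a hypothetical custom
+ VisionProcessor like the one developed in the colorDetection section):
+ \`\`\`java
+ // Limelight localizes from AprilTags on its own processor
+ Limelight3A limelight = hardwareMap.get(Limelight3A.class, "limelight");
+ limelight.pipelineSwitch(0); // assumes pipeline 0 = AprilTag in the web UI
+ limelight.start();
+
+ // Webcam + VisionPortal runs the custom color pipeline on the Control Hub CPU
+ VisionPortal portal = VisionPortal.easyCreateWithDefaults(
+         hardwareMap.get(WebcamName.class, "Webcam 1"), myColorProcessor);
+ \`\`\`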
+ `,
+     visionPortalSetup: `
+ ## VisionPortal Setup — Complete Guide
+
+ VisionPortal is the SDK's built-in vision framework. It manages the camera lifecycle, routes
+ frames to processors, and handles the preview display.
+
+ ### Imports
+ \`\`\`java
+ import android.util.Size;
+ import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
+ import org.firstinspires.ftc.robotcore.external.hardware.camera.BuiltinCameraDirection;
+ import org.firstinspires.ftc.vision.VisionPortal;
+ \`\`\`
+
+ ### Quick Setup (Defaults)
+ \`\`\`java
+ // One-liner with a single processor
+ VisionPortal portal = VisionPortal.easyCreateWithDefaults(
+         hardwareMap.get(WebcamName.class, "Webcam 1"),
+         aprilTagProcessor
+ );
+
+ // Multiple processors
+ VisionPortal portal = VisionPortal.easyCreateWithDefaults(
+         hardwareMap.get(WebcamName.class, "Webcam 1"),
+         aprilTagProcessor, myColorProcessor
+ );
+ \`\`\`
+
+ ### Builder Pattern (Full Control)
+ \`\`\`java
+ VisionPortal portal = new VisionPortal.Builder()
+         // Camera source — USB webcam or phone camera
+         .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
+         // OR: .setCamera(BuiltinCameraDirection.BACK) // phone camera
+
+         // Add one or more processors
+         .addProcessor(aprilTagProcessor)
+         .addProcessor(myColorProcessor) // optional second processor
+
+         // Resolution — must match a resolution the webcam supports
+         // Common choices: 320x240, 640x480, 800x600, 1280x720, 1920x1080
+         .setCameraResolution(new Size(640, 480))
+
+         // Stream format
+         // YUY2 (default): higher quality, higher USB bandwidth
+         // MJPEG: compressed, saves USB bandwidth (use for dual cameras)
+         .setStreamFormat(VisionPortal.StreamFormat.YUY2)
+
+         // LiveView — the camera preview on the Robot Controller screen
+         .enableLiveView(true)       // true = show preview (default)
+         .setAutoStopLiveView(false) // if true, LiveView halts (solid orange) when all processors are disabled
+
+         .build();
+ \`\`\`
+
+ ### Camera State Machine
+ \`\`\`
+ OPENING_CAMERA_DEVICE → CAMERA_DEVICE_READY → STARTING_STREAM → STREAMING
+
+ CAMERA_DEVICE_CLOSED ← CLOSING ← STOPPING_STREAM
+ \`\`\`
+
+ Camera controls (exposure, gain) require the camera to be in STREAMING state.
+ Always wait for streaming before setting controls:
+ \`\`\`java
+ // Wait for camera to start streaming
+ while (!isStopRequested() &&
+         portal.getCameraState() != VisionPortal.CameraState.STREAMING) {
+     sleep(20);
+ }
+ // NOW safe to set camera controls
+ \`\`\`
+
+ ### Lifecycle Management — 4 Levels of CPU Control
+ \`\`\`java
+ // Level 1: Toggle LiveView only (fastest resume, minimal CPU savings)
+ portal.stopLiveView();
+ portal.resumeLiveView();
+
+ // Level 2: Toggle processors (fast resume, good CPU savings)
+ portal.setProcessorEnabled(aprilTagProcessor, false); // disable
+ portal.setProcessorEnabled(aprilTagProcessor, true);  // re-enable
+
+ // Level 3: Stop/resume streaming (~1 second resume time)
+ portal.stopStreaming();   // stops all background processing
+ portal.resumeStreaming(); // restarts camera stream
+
+ // Level 4: Close portal (permanent — cannot reopen)
+ portal.close(); // call only when completely done with vision
+ \`\`\`
+
+ **Best practice:** Use Level 2 (processor toggling) during a match. Disable AprilTag
+ during TeleOp driving, enable it when you need alignment. Avoid Level 3 unless you
+ have a long period without vision (the ~1s resume time can cost you in auto).
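+
+ A sketch of Level 2 in a TeleOp loop (the gamepad binding and processor name are illustrative):
+ \`\`\`java
+ // Run AprilTag only while the driver holds left bumper
+ boolean wantAprilTag = gamepad1.left_bumper;
+ if (wantAprilTag != portal.getProcessorEnabled(aprilTagProcessor)) {
+     portal.setProcessorEnabled(aprilTagProcessor, wantAprilTag);
+ }
+ \`\`\`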
+
+ ### Monitoring
+ \`\`\`java
+ // Frame rate
+ float fps = portal.getFps();
+
+ // Camera state
+ VisionPortal.CameraState state = portal.getCameraState();
+
+ // Processor status
+ boolean enabled = portal.getProcessorEnabled(aprilTagProcessor);
+ \`\`\`
+
+ ### Stream Format Guidance
+ | Format | Bandwidth | Quality | Best For |
+ |---|---|---|---|
+ | YUY2 (default) | High | Best | Single camera, plenty of USB bandwidth |
+ | MJPEG | Low | Good (compressed) | Dual cameras, shared USB hub, Dashboard streaming |
+
+ **Note:** Very low resolutions (<432x240) with MJPEG may have poor image quality.
+ Very high resolutions (>640x480) with YUY2 may exceed USB 2.0 bandwidth on a shared hub.
+
+ ### Discovering Supported Resolutions
+ If you don't know what resolutions your webcam supports, intentionally request an
+ invalid resolution (like 1x1). The error message will list all supported resolutions:
+ \`\`\`java
+ .setCameraResolution(new Size(1, 1)) // Will fail with list of supported sizes
+ \`\`\`
+ `,
+     aprilTagDetection: `
+ ## AprilTag Detection — Complete Guide
+
+ The FTC SDK includes a built-in AprilTagProcessor that detects AprilTag fiducial markers
+ and computes 6DOF (six degrees of freedom) pose estimation. The current FTC season uses
+ the TAG_36h11 family.
+
+ ### Imports
+ \`\`\`java
+ import org.firstinspires.ftc.vision.apriltag.AprilTagProcessor;
+ import org.firstinspires.ftc.vision.apriltag.AprilTagDetection;
+ import org.firstinspires.ftc.vision.apriltag.AprilTagGameDatabase;
+ import org.firstinspires.ftc.vision.apriltag.AprilTagLibrary;
+ import org.firstinspires.ftc.vision.apriltag.AprilTagPoseFtc;
+ import org.firstinspires.ftc.robotcore.external.navigation.DistanceUnit;
+ import org.firstinspires.ftc.robotcore.external.navigation.AngleUnit;
+
+ // Also needed by the complete example at the end of this guide:
+ import org.firstinspires.ftc.robotcore.external.hardware.camera.controls.ExposureControl;
+ import org.firstinspires.ftc.robotcore.external.hardware.camera.controls.GainControl;
+ import java.util.concurrent.TimeUnit;
+ import java.util.List;
+ \`\`\`
+
+ ### Quick Setup
+ \`\`\`java
+ AprilTagProcessor aprilTag = AprilTagProcessor.easyCreateWithDefaults();
+ \`\`\`
+
+ ### Builder Pattern (Full Customization)
+ \`\`\`java
+ AprilTagProcessor aprilTag = new AprilTagProcessor.Builder()
+         // Tag family — TAG_36h11 for FTC (default)
+         .setTagFamily(AprilTagProcessor.TagFamily.TAG_36h11)
+
+         // Tag library — includes current season field tags + sample tags
+         .setTagLibrary(AprilTagGameDatabase.getCurrentGameTagLibrary())
+
+         // Output units (default: INCH, DEGREES)
+         .setOutputUnits(DistanceUnit.INCH, AngleUnit.DEGREES)
+
+         // Lens intrinsics (fx, fy, cx, cy) — override if SDK doesn't have your webcam's calibration
+         // Omit this to use the SDK's built-in calibration for recognized cameras
+         // .setLensIntrinsics(578.272, 578.272, 402.145, 221.506)
+
+         // Annotation drawing options (what to draw on the preview)
+         .setDrawTagID(true)           // Show tag ID number
+         .setDrawTagOutline(true)      // Draw outline around tag
+         .setDrawAxes(false)           // Draw XYZ axes (costly, disable in competition)
+         .setDrawCubeProjection(false) // Draw 3D cube projection (costly, disable in competition)
+
+         .build();
+ \`\`\`
+
+ ### Decimation — THE Key Performance Setting
+ Decimation downsamples the image before tag detection, trading range for speed.
+ This is the single most impactful performance setting for AprilTag.
+
+ \`\`\`java
+ // Can be changed at ANY time during runtime (not just at build)
+ aprilTag.setDecimation(3); // default is 3
+ \`\`\`
+
+ **Decimation trade-offs (Logitech C920 at 640x480):**
+ | Decimation | FPS | Detect 2" tag from | Detect 5" tag from |
+ |---|---|---|---|
+ | 1 | ~10 | 10 feet | 25+ feet |
+ | 2 | ~22 | 6 feet | 15 feet |
+ | 3 (default) | ~30 | 4 feet | 10 feet |
+
+ **Strategy:**
+ - Use decimation=1 during init (static robot, max range, FPS doesn't matter)
+ - Switch to decimation=3 during driving (need high FPS, closer range OK)
+ - Switch to decimation=2 for final alignment (balance of range and speed)
+
+ ### Reading Detections
+ \`\`\`java
+ List<AprilTagDetection> detections = aprilTag.getDetections();
+
+ for (AprilTagDetection detection : detections) {
+     int tagId = detection.id;
+
+     if (detection.metadata != null) {
+         // Tag IS in the library — full 6DOF pose available
+         String tagName = detection.metadata.name;
+
+         // Camera-relative pose (AprilTagPoseFtc):
+         double x = detection.ftcPose.x;         // lateral offset (inches, right = positive)
+         double y = detection.ftcPose.y;         // forward distance (inches)
+         double z = detection.ftcPose.z;         // vertical offset (inches, up = positive)
+         double yaw = detection.ftcPose.yaw;     // heading rotation (degrees)
+         double pitch = detection.ftcPose.pitch; // pitch rotation (degrees)
+         double roll = detection.ftcPose.roll;   // roll rotation (degrees)
+
+         // Spherical coordinates (often more useful for driving):
+         double range = detection.ftcPose.range;         // direct distance to tag (inches)
+         double bearing = detection.ftcPose.bearing;     // horizontal angle to tag (degrees)
+         double elevation = detection.ftcPose.elevation; // vertical angle to tag (degrees)
+     } else {
+         // Tag NOT in library — only pixel center available (no pose)
+         double centerX = detection.center.x; // pixels
+         double centerY = detection.center.y; // pixels
+     }
+ }
+ \`\`\`
+
+ ### AprilTagPoseFtc Coordinate System
+ \`\`\`
+         +Z (up)
+         |
+         |
+         +--- +X (right)
+        /
+       /
+     +Y (forward, out of camera)
+ \`\`\`
+ - **Y axis** = straight out from the camera lens (distance)
+ - **X axis** = to the right of the camera
+ - **Z axis** = upward from the camera
+ - **range** = sqrt(x² + y² + z²) = direct line-of-sight distance
+ - **bearing** = horizontal angle from camera centerline to tag
+ - **elevation** = vertical angle from camera centerline to tag
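+
+ The spherical values drop straight into simple proportional alignment. A minimal sketch
+ (the 12-inch standoff and gains are illustrative, not tuned values; Range is
+ com.qualcomm.robotcore.util.Range):
+ \`\`\`java
+ // Drive toward a detected tag using range/bearing (sketch)
+ double rangeError   = det.ftcPose.range - 12.0; // stop 12 inches from the tag
+ double headingError = det.ftcPose.bearing;      // 0 when the tag is centered
+
+ double forward = Range.clip(rangeError * 0.03, -0.4, 0.4);
+ double turn    = Range.clip(headingError * 0.02, -0.3, 0.3);
+ // feed forward/turn into your drivetrain helper
+ \`\`\`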
+
+ ### Custom Tag Libraries
+ Add your own tags alongside the season's field tags:
+ \`\`\`java
+ AprilTagLibrary myLibrary = new AprilTagLibrary.Builder()
+         .addTags(AprilTagGameDatabase.getCurrentGameTagLibrary()) // include season tags
+         .addTag(55, "Team Marker", 3.5, DistanceUnit.INCH)        // add custom tag
+         .build();
+
+ AprilTagProcessor aprilTag = new AprilTagProcessor.Builder()
+         .setTagLibrary(myLibrary)
+         .build();
+ \`\`\`
+
+ **Tag metadata includes:**
+ - \`id\` — integer ID code
+ - \`name\` — human-readable name
+ - \`tagsize\` — size across the inner black border (needed for pose estimation)
+ - \`fieldPosition\` / \`fieldOrientation\` — optional field-relative position for localization
+
+ ### Complete AprilTag Detection Example
+ \`\`\`java
+ @TeleOp(name = "AprilTag Detection", group = "Vision")
+ public class AprilTagExample extends LinearOpMode {
+     @Override
+     public void runOpMode() {
+         AprilTagProcessor aprilTag = new AprilTagProcessor.Builder()
+                 .setDrawAxes(true)
+                 .setDrawTagOutline(true)
+                 .setOutputUnits(DistanceUnit.INCH, AngleUnit.DEGREES)
+                 .build();
+
+         VisionPortal portal = new VisionPortal.Builder()
+                 .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
+                 .addProcessor(aprilTag)
+                 .setCameraResolution(new Size(640, 480))
+                 .build();
+
+         // Wait for camera to start streaming
+         while (!isStopRequested() &&
+                 portal.getCameraState() != VisionPortal.CameraState.STREAMING) {
+             sleep(20);
+         }
+
+         // Set low exposure for reduced motion blur
+         ExposureControl exposure = portal.getCameraControl(ExposureControl.class);
+         exposure.setMode(ExposureControl.Mode.Manual);
+         exposure.setExposure(6, TimeUnit.MILLISECONDS);
+         GainControl gain = portal.getCameraControl(GainControl.class);
+         gain.setGain(250);
+
+         // Low decimation during init for maximum detection range
+         aprilTag.setDecimation(1);
+
+         waitForStart();
+
+         // Higher decimation = faster processing during driving
+         aprilTag.setDecimation(3);
+
+         while (opModeIsActive()) {
+             List<AprilTagDetection> detections = aprilTag.getDetections();
+             telemetry.addData("Tags Detected", detections.size());
+
+             for (AprilTagDetection det : detections) {
+                 if (det.metadata != null) {
+                     telemetry.addLine(String.format("\\nTag %d \\"%s\\"", det.id, det.metadata.name));
+                     telemetry.addData("  Range", "%.1f inches", det.ftcPose.range);
+                     telemetry.addData("  Bearing", "%.1f degrees", det.ftcPose.bearing);
+                     telemetry.addData("  Yaw", "%.1f degrees", det.ftcPose.yaw);
+                 }
+             }
+             telemetry.addData("FPS", "%.1f", portal.getFps());
+             telemetry.update();
+             sleep(20);
+         }
+         portal.close();
+     }
+ }
+ \`\`\`
+ `,
+     cameraControls: `
+ ## Camera Controls — Exposure, Gain, Focus, White Balance
+
+ USB webcam controls are critical for reliable vision. Low exposure reduces motion blur
+ (critical for AprilTags during driving), and manual white balance ensures consistent color detection.
+
+ **IMPORTANT:** Camera controls are only available for USB webcams, NOT phone cameras or Limelight.
+ Controls require the camera to be in STREAMING state — always wait for streaming first.
+
+ ### Imports
+ \`\`\`java
+ import org.firstinspires.ftc.robotcore.external.hardware.camera.controls.ExposureControl;
+ import org.firstinspires.ftc.robotcore.external.hardware.camera.controls.GainControl;
+ import org.firstinspires.ftc.robotcore.external.hardware.camera.controls.FocusControl;
+ import org.firstinspires.ftc.robotcore.external.hardware.camera.controls.WhiteBalanceControl;
+ import org.firstinspires.ftc.robotcore.external.hardware.camera.controls.PtzControl;
+ import java.util.concurrent.TimeUnit;
+ \`\`\`
+
+ ### Wait for Camera Ready
+ \`\`\`java
+ // MUST wait for STREAMING state before accessing any camera control
+ while (!isStopRequested() &&
+         portal.getCameraState() != VisionPortal.CameraState.STREAMING) {
+     sleep(20);
+ }
+ \`\`\`
+
+ ### Exposure Control (Most Important for AprilTags)
+ \`\`\`java
+ ExposureControl exposure = portal.getCameraControl(ExposureControl.class);
+
+ // Switch to manual mode (required before setting exposure)
+ exposure.setMode(ExposureControl.Mode.Manual);
+ sleep(50); // allow mode switch to take effect
+
+ // Set exposure time in milliseconds
+ exposure.setExposure(6, TimeUnit.MILLISECONDS);
+ sleep(20);
+
+ // Get supported range
+ long minExposure = exposure.getMinExposure(TimeUnit.MILLISECONDS); // typically 0-1
+ long maxExposure = exposure.getMaxExposure(TimeUnit.MILLISECONDS); // typically 200-250
+
+ // Check if manual mode is supported
+ boolean canManual = exposure.isModeSupported(ExposureControl.Mode.Manual);
+ \`\`\`
+
+ **Exposure guidelines:**
+ - **5-6 ms** — Best for AprilTag detection while driving. Very little motion blur.
+   Image will be dark; compensate with gain.
+ - **10-15 ms** — Good balance for color detection. Some motion blur at speed.
+ - **30+ ms** — Bright image, significant motion blur. Only use when the robot is stationary.
+ - **Auto mode** — Camera adjusts itself. Inconsistent. Avoid for competition.
+
+ ### Gain Control (Brightness Compensation)
+ \`\`\`java
+ GainControl gain = portal.getCameraControl(GainControl.class);
+ gain.setGain(250); // 0 to 255 typically
+ sleep(20);
+
+ // Get supported range
+ int minGain = gain.getMinGain(); // typically 0
+ int maxGain = gain.getMaxGain(); // typically 255
+ \`\`\`
+
+ **Gain guidelines:**
+ - Higher gain = brighter image, more noise
+ - Lower gain = darker image, less noise
+ - For low exposure (5-6ms), set gain to 200-250 to compensate for darkness
+ - High gain + low exposure is better than high exposure for moving robots
+
+ ### Focus Control
+ \`\`\`java
+ FocusControl focus = portal.getCameraControl(FocusControl.class);
+
+ // Fixed focus at infinity (best for AprilTags)
+ focus.setMode(FocusControl.Mode.Fixed);
+ focus.setFocusLength(0); // 0 = infinity on most webcams
+ sleep(20);
+
+ // OR auto-focus (inconsistent, not recommended for competition)
+ focus.setMode(FocusControl.Mode.Auto);
+
+ // Get supported range
+ double minFocus = focus.getMinFocusLength(); // 0
+ double maxFocus = focus.getMaxFocusLength(); // 250 typically
+ \`\`\`
+
+ **Focus guidelines:**
+ - **Fixed at 0 (infinity)** — Best for AprilTags and distant objects. Always consistent.
+ - **Auto-focus** — Can hunt and cause missed detections. Avoid in competition.
+ - Logitech C270 has fixed focus (no adjustment needed — it's always at infinity).
+ - Logitech C920/C922 have adjustable focus — MUST set to fixed in code.
+
+ ### White Balance Control (Critical for Color Detection)
+ \`\`\`java
+ WhiteBalanceControl wb = portal.getCameraControl(WhiteBalanceControl.class);
+
+ // Manual white balance (consistent color detection)
+ wb.setMode(WhiteBalanceControl.Mode.MANUAL);
+ wb.setWhiteBalanceTemperature(4600); // Kelvin (neutral white)
+ sleep(20);
+
+ // Get supported range
+ int minTemp = wb.getMinWhiteBalanceTemperature(); // ~2000
+ int maxTemp = wb.getMaxWhiteBalanceTemperature(); // ~6500
+ \`\`\`
+
+ **White balance guidelines:**
+ - **Manual ~4000-5000K** — Best for consistent color detection under venue lighting
+ - **Auto** — Camera adjusts to ambient light. Colors shift between venues. Avoid for competition.
+ - Tune white balance at the actual competition venue if possible
+ - Indoor fluorescent: ~4000K. Indoor LED: ~5000K. Outdoor/daylight: ~6500K.
+
+ ### Recommended Competition Settings
+ \`\`\`java
+ // For AprilTag detection during driving (low blur, moderate brightness)
+ private void setAprilTagExposure(VisionPortal portal) {
+     ExposureControl exposure = portal.getCameraControl(ExposureControl.class);
+     exposure.setMode(ExposureControl.Mode.Manual);
+     sleep(50);
+     exposure.setExposure(6, TimeUnit.MILLISECONDS);
+     sleep(20);
+     GainControl gain = portal.getCameraControl(GainControl.class);
+     gain.setGain(250);
+     sleep(20);
+ }
+
+ // For color detection (brighter, consistent white balance)
+ private void setColorDetectionSettings(VisionPortal portal) {
+     ExposureControl exposure = portal.getCameraControl(ExposureControl.class);
+     exposure.setMode(ExposureControl.Mode.Manual);
+     sleep(50);
+     exposure.setExposure(15, TimeUnit.MILLISECONDS);
+     sleep(20);
+     GainControl gain = portal.getCameraControl(GainControl.class);
+     gain.setGain(150);
+     sleep(20);
+     WhiteBalanceControl wb = portal.getCameraControl(WhiteBalanceControl.class);
+     wb.setMode(WhiteBalanceControl.Mode.MANUAL);
+     wb.setWhiteBalanceTemperature(4600);
+     sleep(20);
+ }
+ \`\`\`
+
+ ### Logitech C920 Reference Ranges
+ | Control | Min | Max | Default |
+ |---|---|---|---|
+ | Exposure | 0 ms | 204 ms | Auto |
+ | Gain | 0 | 255 | Auto |
+ | White Balance | 2000K | 6500K | Auto |
+ | Focus | 0 | 250 | 0 (infinity) |
+ | Zoom | 100 | 500 | 100 |
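+
+ The zoom row is exposed through PtzControl. A brief sketch (method names from the SDK's
+ camera controls package; verify support on your camera, and note that digital zoom just
+ crops the sensor):
+ \`\`\`java
+ PtzControl ptz = portal.getCameraControl(PtzControl.class);
+ int minZoom = ptz.getMinZoom(); // 100 on the C920
+ int maxZoom = ptz.getMaxZoom(); // 500 on the C920
+ ptz.setZoom(minZoom);           // usually leave zoom at minimum for vision work
+ \`\`\`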
+ `,
+     limelight: `
+ ## Limelight 3A — Complete FTC Guide
+
+ The Limelight 3A is a dedicated vision coprocessor with an integrated camera. It connects via
+ USB-C and appears as a network device in the FTC hardwareMap. All image processing runs on the
+ Limelight's processor — zero CPU load on the Control Hub.
+
+ ### Hardware Setup
+ 1. Mount the Limelight 3A on the robot (4x #10 THRU, 4x M3, or 6x M4 mounting holes)
+ 2. Connect USB-C cable from Limelight to the Control Hub's **USB 3.0 port** (blue port)
+ 3. Power comes through USB (4.1V-5.75V, max 4W)
+ 4. Wait for green status light (~15-20 seconds boot)
+ 5. Connect to the Control Hub's WiFi, then access the web UI at \`http://limelight.local:5801\`
+
+ ### Hardware Map Configuration
+ On the Driver Station:
+ 1. Click "Configure Robot"
+ 2. Click "Scan" — an "Ethernet Device" will appear
+ 3. Rename it to \`"limelight"\`
+ 4. Save the configuration
+
+ ### Imports
+ \`\`\`java
+ import com.qualcomm.hardware.limelightvision.LLResult;
+ import com.qualcomm.hardware.limelightvision.LLResultTypes;
+ import com.qualcomm.hardware.limelightvision.LLStatus;
+ import com.qualcomm.hardware.limelightvision.Limelight3A;
+ import org.firstinspires.ftc.robotcore.external.navigation.Pose3D;
+ import org.firstinspires.ftc.robotcore.external.navigation.AngleUnit;
+ \`\`\`
+
+ ### Basic Setup
+ \`\`\`java
+ Limelight3A limelight = hardwareMap.get(Limelight3A.class, "limelight");
+
+ // Set how frequently to poll for data (100 = maximum freshness)
+ limelight.setPollRateHz(100);
+
+ // Select which pipeline to use (0-9, configured in web UI)
+ limelight.pipelineSwitch(0);
+
+ // Start polling — REQUIRED. getLatestResult() returns null without this.
+ limelight.start();
+ \`\`\`
+
+ ### Available Pipeline Types (Configured in Web UI)
+ | Pipeline Type | Description | FPS |
+ |---|---|---|
+ | **AprilTag / Fiducial** | Detects AprilTags, computes 6DOF pose, MegaTag localization | 20-50 |
+ | **Color Blob** | Tracks colored objects by HSV range | 90 |
+ | **Neural Network Detector** | Object detection using ML model (CPU inference) | 15-30 |
+ | **Neural Network Classifier** | Image classification using ML model | 15-30 |
+ | **Barcode / QR Code** | Reads barcodes and QR codes | 30-60 |
+ | **Python SnapScript** | Custom OpenCV + numpy pipeline | Varies |
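+
+ Because pipelines hot-swap, a common pattern is binding them to driver controls. A sketch
+ (assumes pipeline 0 = AprilTag and pipeline 1 = color blob, as configured in the web UI):
+ \`\`\`java
+ // Hot-swap pipelines mid-match
+ if (gamepad1.a) {
+     limelight.pipelineSwitch(0); // AprilTag localization
+ } else if (gamepad1.b) {
+     limelight.pipelineSwitch(1); // color blob tracking
+ }
+ \`\`\`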
+
+ ### Reading Results — Universal Pattern
+ \`\`\`java
+ LLResult result = limelight.getLatestResult();
+ if (result != null && result.isValid()) {
+     // Basic targeting (works for ALL pipeline types)
+     double tx = result.getTx(); // horizontal offset to target (degrees)
+     double ty = result.getTy(); // vertical offset to target (degrees)
+     double ta = result.getTa(); // target area (0-100% of image)
+
+     // Latency
+     double captureLatency = result.getCaptureLatency();     // ms from exposure to tracking
+     double targetingLatency = result.getTargetingLatency(); // ms for tracking computation
+     double totalLatency = captureLatency + targetingLatency;
+
+     // Data freshness
+     double staleness = result.getStaleness(); // age in ms (< 100ms is fresh)
+ }
+ \`\`\`
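+
+ Since tx is a signed angle off the crosshair, centering a target is a small proportional
+ controller. A sketch continuing from the pattern above (kP and clip limits are illustrative;
+ Range is com.qualcomm.robotcore.util.Range):
+ \`\`\`java
+ // Turn until tx ~ 0 (tune kP for your drivetrain)
+ double turn = 0.0;
+ if (result != null && result.isValid()) {
+     turn = Range.clip(result.getTx() * 0.02, -0.3, 0.3);
+ }
+ // apply 'turn' to your drivetrain
+ \`\`\`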
+
+ ### AprilTag / Fiducial Results
+ \`\`\`java
+ List<LLResultTypes.FiducialResult> fiducials = result.getFiducialResults();
+ for (LLResultTypes.FiducialResult fr : fiducials) {
+     int tagId = fr.getFiducialId();
+     String family = fr.getFamily();     // "36H11C" for FTC tags
+     double tx = fr.getTargetXDegrees(); // horizontal offset
+     double ty = fr.getTargetYDegrees(); // vertical offset
+     double area = fr.getTargetArea();   // target area (0-100)
+
+     // 3D pose data (requires "Full 3D" enabled in pipeline Advanced tab)
+     Pose3D robotPoseTag = fr.getRobotPoseTargetSpace();   // robot relative to this tag
+     Pose3D cameraPoseTag = fr.getCameraPoseTargetSpace(); // camera relative to this tag
+     Pose3D robotPoseField = fr.getRobotPoseFieldSpace();  // robot in field space (from this tag)
+     Pose3D tagPoseCamera = fr.getTargetPoseCameraSpace(); // tag in camera coordinate system
+     Pose3D tagPoseRobot = fr.getTargetPoseRobotSpace();   // tag in robot coordinate system
+ }
+ \`\`\`
+
+ ### Color Blob Results
+ \`\`\`java
+ List<LLResultTypes.ColorResult> colors = result.getColorResults();
+ for (LLResultTypes.ColorResult cr : colors) {
+     double tx = cr.getTargetXDegrees(); // horizontal offset
+     double ty = cr.getTargetYDegrees(); // vertical offset
+     double area = cr.getTargetArea();   // blob area (0-100)
+ }
+ \`\`\`
+
+ ### Neural Network Detector Results
+ \`\`\`java
+ List<LLResultTypes.DetectorResult> detections = result.getDetectorResults();
+ for (LLResultTypes.DetectorResult dr : detections) {
+     String className = dr.getClassName(); // detected class name
+     double tx = dr.getTargetXDegrees();
+     double ty = dr.getTargetYDegrees();
+     double area = dr.getTargetArea();
+ }
+ \`\`\`
+
+ ### Neural Network Classifier Results
+ \`\`\`java
+ List<LLResultTypes.ClassifierResult> classResults = result.getClassifierResults();
+ for (LLResultTypes.ClassifierResult cr : classResults) {
+     String className = cr.getClassName();   // classified class name
+     double confidence = cr.getConfidence(); // confidence score (0-1)
+ }
+ \`\`\`
+
+ ### Barcode / QR Code Results
+ \`\`\`java
+ List<LLResultTypes.BarcodeResult> barcodes = result.getBarcodeResults();
+ for (LLResultTypes.BarcodeResult br : barcodes) {
+     String data = br.getData();     // barcode content string
+     String family = br.getFamily(); // barcode type (e.g., "QR")
+ }
+ \`\`\`
+
+ ### Status Monitoring
+ \`\`\`java
+ LLStatus status = limelight.getStatus();
+ String name = status.getName();              // device hostname
+ double temp = status.getTemp();              // CPU temperature (Celsius)
+ double cpu = status.getCpu();                // CPU usage (percentage)
+ double fps = status.getFps();                // current processing FPS
+ int pipelineIdx = status.getPipelineIndex(); // current pipeline index
+ String pipeType = status.getPipelineType();  // current pipeline type string
+ \`\`\`
+
+ ### Lifecycle
+ \`\`\`java
+ limelight.start(); // begin polling (REQUIRED)
+ limelight.stop();  // stop polling (call in stop() or end of OpMode)
+ \`\`\`
+
+ ### Complete Example
+ \`\`\`java
+ @TeleOp(name = "Limelight Demo", group = "Vision")
+ public class LimelightDemo extends LinearOpMode {
+     @Override
+     public void runOpMode() {
+         Limelight3A limelight = hardwareMap.get(Limelight3A.class, "limelight");
+         telemetry.setMsTransmissionInterval(11); // fast telemetry for Limelight
+         limelight.setPollRateHz(100);
+         limelight.pipelineSwitch(0);
+         limelight.start();
+
+         waitForStart();
+
+         while (opModeIsActive()) {
+             LLStatus status = limelight.getStatus();
+             telemetry.addData("LL", "Temp:%.1fC CPU:%.0f%% FPS:%d",
+                     status.getTemp(), status.getCpu(), (int) status.getFps());
+
+             LLResult result = limelight.getLatestResult();
+             if (result != null && result.isValid()) {
+                 telemetry.addData("tx/ty", "%.1f / %.1f", result.getTx(), result.getTy());
+                 telemetry.addData("Latency", "%.0fms",
+                         result.getCaptureLatency() + result.getTargetingLatency());
+                 telemetry.addData("Staleness", "%.0fms", result.getStaleness());
+
+                 for (LLResultTypes.FiducialResult fr : result.getFiducialResults()) {
+                     telemetry.addData("Tag " + fr.getFiducialId(),
+                             "tx:%.1f ty:%.1f area:%.1f",
+                             fr.getTargetXDegrees(), fr.getTargetYDegrees(), fr.getTargetArea());
+                 }
+             } else {
+                 telemetry.addData("Result", "No valid targets");
+             }
+             telemetry.update();
+         }
+         limelight.stop();
+     }
+ }
+ \`\`\`
+ `,
+     megaTag: `
+ ## MegaTag — Robot Localization from AprilTags
+
+ The Limelight 3A offers two robot localization modes that compute your robot's position on the
+ field from visible AprilTags. These are the most powerful features of the Limelight for FTC.
+
+ ### Prerequisites (Both MegaTag Modes)
+ 1. Configure the **camera's position relative to robot center** in the Limelight web UI
+    (Input tab → Camera Pose). Measure X/Y/Z offset and rotation accurately.
+ 2. Upload or select the **field map** (the Limelight ships with the current FTC season's map)
+ 3. Enable **"Full 3D"** in the AprilTag pipeline's Advanced tab
+
+ ### MegaTag1 — Multi-Tag Fusion (No IMU Required)
+
+ MegaTag1 uses multiple simultaneously-visible AprilTags to compute a stable robot pose.
+ It does NOT require an external IMU.
+
+ **How it works:**
+ - When 2+ tags are visible, the pose is very stable and accurate
+ - With only 1 tag, there is risk of pose ambiguity (flipping between two possible poses),
+   especially when the camera is head-on to the tag
+ - No external sensor fusion — purely vision-based
+
+ **Code:**
+ \`\`\`java
+ LLResult result = limelight.getLatestResult();
+ if (result != null && result.isValid()) {
+     Pose3D botpose = result.getBotpose(); // MegaTag1 result
+     if (botpose != null) {
+         double x = botpose.getPosition().x; // field X (meters)
+         double y = botpose.getPosition().y; // field Y (meters)
+         double z = botpose.getPosition().z; // field Z (meters)
+         double yaw = botpose.getOrientation().getYaw(); // heading (degrees)
+         double pitch = botpose.getOrientation().getPitch();
+         double roll = botpose.getOrientation().getRoll();
+     }
+ }
+ \`\`\`
+
+ **When to use MegaTag1:**
+ - Robot doesn't have a reliable IMU (MT1 is the natural backup)
+ - Multiple tags are usually visible (MT1 shines with 2+ tags)
+
+ ### MegaTag2 — IMU-Fused Localization (Recommended)
+
+ MegaTag2 fuses the robot's IMU heading with AprilTag detections for dramatically improved
+ accuracy. It eliminates the single-tag ambiguity problem entirely.
+
+ **How it works:**
+ - You provide the robot's current heading (yaw) from the IMU each frame
+ - The Limelight uses this heading to constrain the pose solution
+ - Even a single tag gives excellent localization (no ambiguity)
+ - Works reliably at any distance where the tag is detectable
+ - More accurate than MegaTag1 in almost all scenarios
+
+ **Code:**
+ \`\`\`java
+ IMU imu = hardwareMap.get(IMU.class, "imu");
+ // ... initialize IMU (a setup sketch follows below) ...
+
+ // In loop — MUST call updateRobotOrientation() EVERY iteration BEFORE getting results:
+ double yaw = imu.getRobotYawPitchRollAngles().getYaw(AngleUnit.DEGREES);
+ limelight.updateRobotOrientation(yaw);
+
+ LLResult result = limelight.getLatestResult();
+ if (result != null && result.isValid()) {
+     Pose3D botpose = result.getBotpose_MT2(); // MegaTag2 result
+     if (botpose != null) {
+         double x = botpose.getPosition().x; // field X (meters)
+         double y = botpose.getPosition().y; // field Y (meters)
+         double heading = botpose.getOrientation().getYaw(); // field heading (degrees)
+
+         telemetry.addData("MT2 Pose", "X:%.2f Y:%.2f Heading:%.1f",
+                 x, y, heading);
+     }
+ }
+ \`\`\`
+
+ **CRITICAL:** \`updateRobotOrientation()\` must be called every loop iteration BEFORE
+ \`getLatestResult()\`. If you skip it, MegaTag2 results will be inaccurate or unavailable.
+
+ **When to use MegaTag2:**
+ - Always, if you have a working IMU (which every FTC robot does via the Control Hub)
+ - Best-in-class FTC robot localization
+ - Works even with a single visible tag
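+
+ For completeness, a minimal setup sketch for the Control Hub's internal IMU (the orientation
+ parameters are an assumption; set them to match how your hub is mounted):
+ \`\`\`java
+ import com.qualcomm.robotcore.hardware.IMU;
+ import com.qualcomm.hardware.rev.RevHubOrientationOnRobot;
+
+ IMU imu = hardwareMap.get(IMU.class, "imu");
+ imu.initialize(new IMU.Parameters(new RevHubOrientationOnRobot(
+         RevHubOrientationOnRobot.LogoFacingDirection.UP,       // assumption: hub mounted flat
+         RevHubOrientationOnRobot.UsbFacingDirection.FORWARD))); // assumption: USB toward front
+ imu.resetYaw(); // define the current heading as 0 degrees
+ \`\`\`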
+
+ ### MegaTag1 vs MegaTag2 Comparison
+
+ | Feature | MegaTag1 | MegaTag2 |
+ |---|---|---|
+ | IMU required | No | Yes (provide yaw each frame) |
+ | Single tag accuracy | Poor (ambiguity risk) | Excellent |
+ | Multi-tag accuracy | Excellent | Excellent |
+ | Heading accuracy | Vision-only | IMU-fused (more stable) |
+ | Setup complexity | Simpler | Slightly more code |
+ | Recommended | Backup option | Primary choice |
+
+ ### FTC Field Coordinate System (Limelight)
+ - Origin (0, 0, 0) = center of the field, on the floor
+ - Units: meters (NOT inches — convert if needed)
+ - 0° yaw: varies by field map configuration
+ - Configure in the Limelight web UI to match your desired convention
+
+ ### Integrating MegaTag with Pedro Pathing / Odometry
+ A common pattern is to use MegaTag2 for periodic position corrections on top of
+ continuous odometry tracking:
+
+ \`\`\`java
+ // In loop:
+ limelight.updateRobotOrientation(getCurrentHeading());
+ LLResult result = limelight.getLatestResult();
+
+ if (result != null && result.isValid()) {
+     Pose3D mt2 = result.getBotpose_MT2();
+     if (mt2 != null && result.getStaleness() < 100) {
+         double confidence = 1.0 / (1.0 + result.getStaleness() / 50.0);
+         // Only correct if confidence is high and data is fresh
+         if (confidence > 0.7) {
+             // Convert meters to inches (Pedro uses inches)
+             double xInches = mt2.getPosition().x * 39.3701;
+             double yInches = mt2.getPosition().y * 39.3701;
+             // Apply correction to your odometry / path follower
+             // follower.setPose(new Pose(xInches, yInches, heading));
+         }
+     }
+ }
+ \`\`\`
+
+ ### Python SnapScript Pipelines (Custom Limelight Vision)
+ The Limelight can run custom Python pipelines using OpenCV 4.10, numpy, and tensorflow:
+
+ \`\`\`python
+ # Written in the Limelight web UI's SnapScript editor
+ import cv2
+ import numpy as np
+
+ def runPipeline(image, llrobot):
+     # image: BGR numpy array from camera
+     # llrobot: double[8] from robot code (via updatePythonInputs())
+
+     # Example: detect yellow objects
+     hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
+     lower = np.array([20, 100, 100])
+     upper = np.array([30, 255, 255])
+     mask = cv2.inRange(hsv, lower, upper)
+
+     contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+
+     largestContour = np.array([[]])
+     llpython = [0, 0, 0, 0, 0, 0, 0, 0]
+
+     if len(contours) > 0:
+         largestContour = max(contours, key=cv2.contourArea)
+         x, y, w, h = cv2.boundingRect(largestContour)
+         llpython[0] = x + w / 2  # center x
+         llpython[1] = y + h / 2  # center y
+         llpython[2] = w * h      # area
+         cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
+
+     # Returns: (contour for crosshair, annotated image, output array)
+     return largestContour, image, llpython
+ \`\`\`
+
+ **Robot-side code to communicate with SnapScript:**
+ \`\`\`java
+ // Send data TO Python pipeline
+ double[] inputs = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0};
+ limelight.updatePythonInputs(inputs);
+
+ // Read data FROM Python pipeline
+ double[] output = result.getPythonOutput();
+ double centerX = output[0];
+ double centerY = output[1];
+ double area = output[2];
+ \`\`\`
+
+ ### Snapshots (Offline Debugging)
+ \`\`\`java
+ limelight.captureSnapshot("auto_start"); // save current frame for later analysis
+ limelight.deleteSnapshots();             // clear all saved snapshots
+ \`\`\`
+ View snapshots in the Limelight web UI for debugging pipeline issues without the robot running.
+ `,
+     colorDetection: `
+ ## Color Detection — Custom VisionProcessor with OpenCV
+
+ For detecting game elements by color (samples, specimens, etc.), you write a custom
+ VisionProcessor that uses OpenCV for HSV thresholding and contour analysis. This runs
+ on the Control Hub CPU via VisionPortal.
+
+ ### How Custom VisionProcessors Work
+ \`\`\`
+ Camera Frame (RGB/YUV) → VisionPortal → Your VisionProcessor.processFrame() → Results
+                                                  └→ Annotated frame for preview/Dashboard
+ \`\`\`
+
+ The \`VisionProcessor\` interface has three methods:
+ 1. \`init()\` — called once when the portal starts
+ 2. \`processFrame()\` — called for each camera frame, does the actual detection
+ 3. \`onDrawFrame()\` — called to draw annotations on the preview
+
+ ### Imports
+ \`\`\`java
+ import org.firstinspires.ftc.vision.VisionProcessor;
+ import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibration; // init() parameter
+ import org.opencv.core.Core;
+ import org.opencv.core.Mat;
+ import org.opencv.core.MatOfPoint;
+ import org.opencv.core.Point;
+ import org.opencv.core.Rect;
+ import org.opencv.core.Scalar;
+ import org.opencv.core.Size;
+ import org.opencv.imgproc.Imgproc;
+ import android.graphics.Canvas;
+ \`\`\`
+
+ ### Complete Color Detection Processor
+ \`\`\`java
+ public class SampleColorProcessor implements VisionProcessor {
+
+     // Detection result — volatile for thread safety (processFrame runs on a camera thread)
+     public volatile DetectionResult latestResult = new DetectionResult();
+
+     // HSV ranges for different sample colors
+     // Tune these using FTC Dashboard or the Limelight web UI as reference
+     // H: 0-179, S: 0-255, V: 0-255 in OpenCV
+     private Scalar lowerBound;
+     private Scalar upperBound;
+
+     // Reusable Mats (allocate once, reuse to avoid GC pressure)
+     private Mat hsv = new Mat();
+     private Mat mask = new Mat();
+     private Mat hierarchy = new Mat();
+
+     public enum SampleColor { RED, BLUE, YELLOW }
+
+     public static class DetectionResult {
+         public boolean detected = false;
+         public double centerX = 0; // pixels
+         public double centerY = 0; // pixels
+         public double width = 0;   // pixels
+         public double height = 0;  // pixels
+         public double area = 0;    // pixels^2
+         public double aspectRatio = 0;
+     }
+
+     public SampleColorProcessor(SampleColor color) {
+         switch (color) {
+             case RED:
+                 // Red wraps around the hue circle — handle both ends
+                 // (the high-hue half is added in processFrame)
+                 lowerBound = new Scalar(0, 120, 70);
+                 upperBound = new Scalar(10, 255, 255);
+                 break;
+             case BLUE:
+                 lowerBound = new Scalar(100, 120, 70);
+                 upperBound = new Scalar(130, 255, 255);
+                 break;
+             case YELLOW:
+                 lowerBound = new Scalar(18, 120, 100);
+                 upperBound = new Scalar(32, 255, 255);
+                 break;
+         }
+     }
+
+     @Override
+     public void init(int width, int height, CameraCalibration calibration) {
+         // Called once when the portal starts streaming
+         // Can use width/height to set up ROI or scaling
+     }
+
+     @Override
+     public Object processFrame(Mat frame, long captureTimeNanos) {
+         // Convert RGB to HSV color space (VisionPortal supplies RGB frames)
+         Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_RGB2HSV);
+
+         // Threshold to create binary mask
+         Core.inRange(hsv, lowerBound, upperBound, mask);
+
+         // For RED: also capture the high-hue range (170-179)
+         if (lowerBound.val[0] < 15) {
+             Mat mask2 = new Mat();
+             Core.inRange(hsv, new Scalar(170, 120, 70), new Scalar(179, 255, 255), mask2);
+             Core.bitwise_or(mask, mask2, mask);
+             mask2.release();
+         }
+
+         // Morphological operations to clean up noise
+         Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5));
+         Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_OPEN, kernel);  // remove small noise
+         Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_CLOSE, kernel); // fill small holes
+         kernel.release();
+
+         // Find contours
+         java.util.List<MatOfPoint> contours = new java.util.ArrayList<>();
+         Imgproc.findContours(mask, contours, hierarchy, Imgproc.RETR_EXTERNAL,
+                 Imgproc.CHAIN_APPROX_SIMPLE);
+
+         // Find the largest contour
+         DetectionResult result = new DetectionResult();
+         double maxArea = 0;
+         Rect bestRect = null;
+
+         for (MatOfPoint contour : contours) {
+             double area = Imgproc.contourArea(contour);
+             if (area > 500 && area > maxArea) { // minimum area filter
+                 maxArea = area;
+                 bestRect = Imgproc.boundingRect(contour);
+                 result.detected = true;
+                 result.area = area;
+             }
+         }
+
+         if (bestRect != null) {
+             result.centerX = bestRect.x + bestRect.width / 2.0;
+             result.centerY = bestRect.y + bestRect.height / 2.0;
+             result.width = bestRect.width;
+             result.height = bestRect.height;
+             result.aspectRatio = (double) bestRect.width / bestRect.height;
+
+             // Draw bounding box on frame (visible in LiveView and Dashboard)
+             Imgproc.rectangle(frame,
+                     new Point(bestRect.x, bestRect.y),
+                     new Point(bestRect.x + bestRect.width, bestRect.y + bestRect.height),
+                     new Scalar(0, 255, 0), 2); // green box, 2px thick
+
+             // Draw center crosshair
+             Imgproc.circle(frame, new Point(result.centerX, result.centerY),
+                     5, new Scalar(255, 0, 0), -1); // red dot
+
+             // Draw label
+             Imgproc.putText(frame, String.format("Area: %.0f", result.area),
+                     new Point(bestRect.x, bestRect.y - 10),
+                     Imgproc.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar(0, 255, 0), 1);
+         }
+
+         // Release contour Mats
+         for (MatOfPoint contour : contours) {
+             contour.release();
+         }
+
+         latestResult = result;
+
+         // Return value is passed to onDrawFrame — can be used for Canvas drawing
+         return result;
+     }
+
+     @Override
+     public void onDrawFrame(Canvas canvas, int onscreenWidth, int onscreenHeight,
+                             float scaleBmpPxToCanvasPx, float scaleCanvasDensity,
+                             Object userContext) {
+         // Optional: draw Android Canvas overlays on the LiveView
+         // The OpenCV drawing in processFrame already handles most annotation needs
+     }
+ }
+ \`\`\`
+
+ ### Using the Custom Processor with VisionPortal
+ \`\`\`java
+ @Autonomous(name = "Color Detection Auto", group = "Vision")
+ public class ColorDetectionAuto extends LinearOpMode {
+     @Override
+     public void runOpMode() {
+         SampleColorProcessor yellowDetector = new SampleColorProcessor(
+                 SampleColorProcessor.SampleColor.YELLOW);
+
+         VisionPortal portal = new VisionPortal.Builder()
+                 .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
+                 .addProcessor(yellowDetector)
+                 .setCameraResolution(new Size(640, 480))
+                 .setStreamFormat(VisionPortal.StreamFormat.MJPEG)
+                 .build();
+
+         // Set manual white balance for consistent color detection
+         while (!isStopRequested() &&
+                 portal.getCameraState() != VisionPortal.CameraState.STREAMING) {
+             sleep(20);
+         }
+         WhiteBalanceControl wb = portal.getCameraControl(WhiteBalanceControl.class);
+         wb.setMode(WhiteBalanceControl.Mode.MANUAL);
+         wb.setWhiteBalanceTemperature(4600);
+
+         // Detect during INIT
+         while (opModeInInit()) {
+             SampleColorProcessor.DetectionResult result = yellowDetector.latestResult;
+             if (result.detected) {
+                 telemetry.addData("Yellow Sample", "Found at (%.0f, %.0f) area=%.0f",
+                         result.centerX, result.centerY, result.area);
+             } else {
+                 telemetry.addData("Yellow Sample", "Not detected");
+             }
+             telemetry.update();
+         }
+
+         // Use detection result in autonomous
+         SampleColorProcessor.DetectionResult finalResult = yellowDetector.latestResult;
+         if (finalResult.detected) {
+             // Navigate to the detected sample...
+         }
+
+         portal.close();
+     }
+ }
+ \`\`\`
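+
+ Detections are in pixels; to steer with them, convert the pixel offset to an approximate
+ bearing using the camera's horizontal FOV. A sketch (the 60° FOV is a placeholder; measure
+ or look up your webcam's actual FOV):
+ \`\`\`java
+ double HFOV_DEGREES = 60.0; // assumption: calibrate for your camera
+ double FRAME_WIDTH  = 640;  // must match setCameraResolution()
+
+ double pixelOffset = yellowDetector.latestResult.centerX - FRAME_WIDTH / 2.0;
+ double bearingDeg  = pixelOffset * (HFOV_DEGREES / FRAME_WIDTH); // + = target right of center
+ \`\`\`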
+
+ ### HSV Tuning Guide (OpenCV Convention)
+
+ OpenCV uses a different HSV range than most tools:
+ - **H (Hue):** 0-179 (NOT 0-360). Divide typical hue values by 2 (e.g., pure yellow at 60° becomes H ≈ 30).
+ - **S (Saturation):** 0-255
+ - **V (Value/Brightness):** 0-255
+
+ **Common FTC game element colors:**
+
+ | Color | H Low | H High | S Low | S High | V Low | V High | Notes |
+ |---|---|---|---|---|---|---|---|
+ | Red (low) | 0 | 10 | 120 | 255 | 70 | 255 | Red wraps — need both ranges |
+ | Red (high) | 170 | 179 | 120 | 255 | 70 | 255 | Combine with OR |
+ | Blue | 100 | 130 | 120 | 255 | 70 | 255 | |
+ | Yellow | 18 | 32 | 120 | 255 | 100 | 255 | |
+ | Green | 40 | 80 | 100 | 255 | 70 | 255 | |
+ | Orange | 10 | 22 | 150 | 255 | 100 | 255 | |
+ | White | 0 | 179 | 0 | 40 | 200 | 255 | Low saturation, high value |
+
+ **IMPORTANT:** These are starting points. ALWAYS tune HSV ranges at the competition venue under
+ actual lighting conditions. Ranges that work in your shop may fail under competition LEDs.
+
+ ### HSV Tuning Tips
+ 1. Use the Limelight web UI or a desktop HSV tool to find initial ranges
+ 2. Make the ranges tunable via \`@Config public static\` fields:
+ \`\`\`java
+ @Config
+ public class VisionConstants {
+     public static double H_LOW = 18;
+     public static double H_HIGH = 32;
+     public static double S_LOW = 120;
+     public static double S_HIGH = 255;
+     public static double V_LOW = 100;
+     public static double V_HIGH = 255;
+     public static double MIN_AREA = 500;
+ }
+ \`\`\`
+ 3. Tune live via FTC Dashboard at the competition venue
+ 4. Set manual white balance to avoid color shifts between venues
+
+ ### Running Multiple Processors Simultaneously
+ \`\`\`java
+ // AprilTag + color detection on the same camera
+ AprilTagProcessor aprilTag = new AprilTagProcessor.Builder().build();
+ SampleColorProcessor colorProc = new SampleColorProcessor(SampleColorProcessor.SampleColor.YELLOW);
+
+ VisionPortal portal = new VisionPortal.Builder()
+         .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
+         .addProcessor(aprilTag)
+         .addProcessor(colorProc)
+         .setCameraResolution(new Size(640, 480))
+         .build();
+
+ // Toggle processors as needed to save CPU
+ portal.setProcessorEnabled(colorProc, false); // disable color during driving
+ portal.setProcessorEnabled(aprilTag, true);   // keep AprilTag active
+ \`\`\`
+
+ ### OpenCV Drawing Functions Reference
+ \`\`\`java
+ // Rectangle
+ Imgproc.rectangle(frame, new Point(x1, y1), new Point(x2, y2),
+         new Scalar(0, 255, 0), 2); // color (RGB for VisionPortal frames), thickness
+
+ // Circle
+ Imgproc.circle(frame, new Point(cx, cy), radius,
+         new Scalar(255, 0, 0), -1); // -1 = filled
+
+ // Line
+ Imgproc.line(frame, new Point(x1, y1), new Point(x2, y2),
+         new Scalar(0, 0, 255), 2);
+
+ // Text
+ Imgproc.putText(frame, "Hello", new Point(x, y),
+         Imgproc.FONT_HERSHEY_SIMPLEX, 0.7, new Scalar(255, 255, 255), 2);
+ \`\`\`
+ `,
+     visionOptimization: `
+ ## Vision Optimization — Maximum FPS, Minimum Loop Impact
+
+ Vision processing can be the biggest CPU consumer on the Control Hub. These optimization
+ techniques are the difference between a 10 FPS stuttering pipeline and a smooth 30+ FPS system.
+
+ ### The Loop Time Impact Problem
+
+ Every frame processed by VisionPortal runs on the Control Hub CPU. If vision takes too long,
+ it can starve your control loop of CPU time, causing jerky driving and slow responses.
+
+ **Baseline measurements (unoptimized, Logitech C920 at 640x480):**
+ - AprilTag only: ~10-15 FPS (decimation=1), ~30 FPS (decimation=3)
+ - Custom OpenCV color detection: ~15-25 FPS depending on complexity
+ - AprilTag + color detection: ~8-12 FPS
+
+ **Optimized targets:**
+ - AprilTag: 30 FPS (decimation=3)
+ - Color detection: 25-30 FPS
+ - Combined: 15-20 FPS
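+
+ To see the impact on your own robot, time the loop directly. A sketch using the SDK's
+ ElapsedTime (com.qualcomm.robotcore.util.ElapsedTime):
+ \`\`\`java
+ ElapsedTime loopTimer = new ElapsedTime();
+ while (opModeIsActive()) {
+     loopTimer.reset();
+     // ... drive code + vision reads ...
+     telemetry.addData("Loop", "%.1f ms", loopTimer.milliseconds());
+     telemetry.addData("Vision FPS", "%.1f", portal.getFps());
+     telemetry.update();
+ }
+ \`\`\`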
1239
+
1240
+ ### 1. Resolution Selection (Biggest Impact)
1241
+
1242
+ Lower resolution = fewer pixels to process = dramatically faster.
1243
+
1244
+ | Resolution | Pixels | Relative Speed | AprilTag Range | Color Detection |
1245
+ |---|---|---|---|---|
1246
+ | 320x240 | 76.8K | Fastest (~3x) | Short (2-4 feet) | Usable for close objects |
1247
+ | 640x480 | 307.2K | Baseline | Medium (5-10 feet) | Good for most FTC tasks |
1248
+ | 800x600 | 480K | Slower | Good | Better quality |
1249
+ | 1280x720 | 921.6K | Much slower (~3x) | Long (15+ feet) | Best quality |
1250
+ | 1920x1080 | 2.07M | Very slow (~7x) | Longest | Overkill for most FTC |
1251
+
1252
+ **Recommendation:** Use **640x480** for most FTC tasks. Only go higher for long-range
1253
+ AprilTag detection during init (when FPS doesn't matter). Drop to **320x240** if you
1254
+ need absolute maximum FPS for simple color detection.
1255
+
1256
+ ### 2. AprilTag Decimation (Most Important for AprilTag FPS)
1257
+
1258
+ Decimation downsamples the image before tag detection. Set it at runtime:
1259
+ \`\`\`java
1260
+ aprilTag.setDecimation(3); // 1 = full resolution, 2 = half, 3 = third
1261
+ \`\`\`
1262
+
1263
+ **Strategy — change decimation dynamically:**
1264
+ \`\`\`java
1265
+ // During INIT: max range, FPS doesn't matter
1266
+ aprilTag.setDecimation(1);
1267
+
1268
+ // During DRIVING: need 30 FPS, shorter range OK
1269
+ aprilTag.setDecimation(3);
1270
+
1271
+ // During ALIGNMENT: balance range and speed
1272
+ aprilTag.setDecimation(2);
1273
+ \`\`\`
1274
+
1275
+ Decimation does NOT affect pose accuracy — it only reduces detection range.
1276
+
1277
+ ### 3. Exposure Settings for Motion Blur
1278
+
1279
+ Low exposure is critical when the robot is moving. Motion blur destroys AprilTag detection.
1280
+
1281
+ \`\`\`java
1282
+ // BEFORE: auto exposure = 30-50ms = blurry tags while driving
1283
+ // AFTER: manual 5-6ms = sharp tags while driving
1284
+
1285
+ ExposureControl exposure = portal.getCameraControl(ExposureControl.class);
1286
+ exposure.setMode(ExposureControl.Mode.Manual);
1287
+ sleep(50);
1288
+ exposure.setExposure(6, TimeUnit.MILLISECONDS);
1289
+ GainControl gain = portal.getCameraControl(GainControl.class);
1290
+ gain.setGain(250); // compensate for dark image
1291
+ \`\`\`
1292
+
1293
+ **This alone can improve AprilTag detection rate from 30% to 95% while driving.**
1294
+
1295
+ ### 4. Disable LiveView in Competition
1296
+
1297
+ LiveView renders the camera preview on the Robot Controller screen. This costs CPU.
1298
+ In competition, no one is looking at the screen — disable it:
1299
+
1300
+ \`\`\`java
1301
+ // Option A: disable at build time
1302
+ .enableLiveView(false)
1303
+
1304
+ // Option B: disable at runtime
1305
+ portal.stopLiveView();
1306
+ \`\`\`
1307
+
1308
+ **Savings:** ~2-5 FPS improvement, plus reduced GPU usage.
1309
+
1310
+ ### 5. Processor Toggling (Enable Only When Needed)
1311
+
1312
+ Don't run processors you don't need right now:
1313
+
1314
+ \`\`\`java
1315
+ // During driving — only need AprilTag for alignment
1316
+ portal.setProcessorEnabled(colorProcessor, false);
1317
+ portal.setProcessorEnabled(aprilTag, true);
1318
+
1319
+ // During intake — only need color detection
1320
+ portal.setProcessorEnabled(aprilTag, false);
1321
+ portal.setProcessorEnabled(colorProcessor, true);
1322
+
1323
+ // Full stop — disable all vision processing
1324
+ portal.setProcessorEnabled(aprilTag, false);
1325
+ portal.setProcessorEnabled(colorProcessor, false);
1326
+ \`\`\`
1327
+
1328
+ This is the fastest toggle — no camera restart required.
1329
+
1330
+ ### 6. Stream Format Choice
1331
+
1332
+ \`\`\`java
1333
+ // YUY2 (default): uncompressed, higher quality, uses more USB bandwidth
1334
+ .setStreamFormat(VisionPortal.StreamFormat.YUY2)
1335
+
1336
+ // MJPEG: compressed, saves USB bandwidth, slight quality loss
1337
+ .setStreamFormat(VisionPortal.StreamFormat.MJPEG)
1338
+ \`\`\`
1339
+
1340
+ **Use MJPEG when:**
1341
+ - Running dual cameras on shared USB hub
1342
+ - Camera is on a USB 2.0 port with other devices
1343
+ - Streaming to FTC Dashboard (MJPEG is already compressed for network)
1344
+ - Resolution is 640x480 or higher
1345
+
1346
+ **Use YUY2 when:**
1347
+ - Single camera on dedicated USB port
1348
+ - Need best possible image quality
1349
+ - Low resolution (320x240 — bandwidth isn't an issue)
1350
+
1351
+ ### 7. Region of Interest (ROI) in Custom Processors
1352
+
1353
+ Don't process the entire frame if you know where the target will be:
1354
+
1355
+ \`\`\`java
1356
+ @Override
1357
+ public Object processFrame(Mat frame, long captureTimeNanos) {
1358
+ // Only process the bottom half of the frame (where samples are)
1359
+ int halfHeight = frame.rows() / 2;
1360
+ Mat roi = frame.submat(halfHeight, frame.rows(), 0, frame.cols());
1361
+
1362
+ Imgproc.cvtColor(roi, hsv, Imgproc.COLOR_RGB2HSV);
1363
+ Core.inRange(hsv, lowerBound, upperBound, mask);
1364
+
1365
+ // ... contour detection on smaller image ...
1366
+
1367
+ // IMPORTANT: adjust detected coordinates back to full-frame
1368
+ // Add halfHeight to any Y coordinates
1369
+ result.centerY += halfHeight;
1370
+
1371
+ roi.release(); // release the submat view
1372
+ return result;
1373
+ }
1374
+ \`\`\`
1375
+
1376
+ **Savings:** Processing half the pixels = roughly 2x faster pipeline.
1377
+
1378
+ ### 8. Minimize Object Allocation in processFrame()
1379
+
1380
+ The \`processFrame()\` method runs 30+ times per second. Avoid allocating new objects:
1381
+
1382
+ \`\`\`java
1383
+ // BAD — allocates new Mats every frame (GC pressure)
1384
+ public Object processFrame(Mat frame, long captureTimeNanos) {
1385
+ Mat hsv = new Mat(); // NEW every frame
1386
+ Mat mask = new Mat(); // NEW every frame
1387
+ // ...
1388
+ }
1389
+
1390
+ // GOOD — reuse pre-allocated Mats
1391
+ private Mat hsv = new Mat(); // allocated once
1392
+ private Mat mask = new Mat(); // allocated once
1393
+
1394
+ public Object processFrame(Mat frame, long captureTimeNanos) {
1395
+ Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_RGB2HSV); // reuses hsv
1396
+ Core.inRange(hsv, lower, upper, mask); // reuses mask
1397
+ // ...
1398
+ }
1399
+ \`\`\`
1400
+
1401
+ Also avoid creating new \`Scalar\`, \`Point\`, \`Size\` objects in processFrame if possible.
1402
+ Pre-allocate them as instance fields.
1403
+
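+ For example, HSV bounds and drawing anchors can live in fields. A short sketch (the
+ names and values here are illustrative, not from this document's processors):
+
+ \`\`\`java
+ // Allocated once with the processor, reused on every frame
+ private final Scalar lowerYellow = new Scalar(20, 100, 100);
+ private final Scalar upperYellow = new Scalar(30, 255, 255);
+ private final Point labelAnchor = new Point(10.0, 30.0);
+ \`\`\`
+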
1404
+ ### 9. Reduce Morphological Operations
1405
+
1406
+ Each morphological operation (erode, dilate, open, close) adds processing time:
1407
+
1408
+ \`\`\`java
1409
+ // Expensive: 3 morphological operations
1410
+ Imgproc.erode(mask, mask, kernel);
1411
+ Imgproc.dilate(mask, mask, kernel);
1412
+ Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_CLOSE, kernel);
1413
+
1414
+ // Cheaper: just one OPEN (removes noise) if that's sufficient
1415
+ Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_OPEN, kernel);
1416
+ \`\`\`
1417
+
1418
+ Or skip morphological operations entirely if your HSV range is tight enough.
1419
+
1420
+ ### 10. Contour Filtering (Reject Early)
1421
+
1422
+ Filter contours by area immediately to avoid processing irrelevant detections:
1423
+
1424
+ \`\`\`java
1425
+ for (MatOfPoint contour : contours) {
1426
+ double area = Imgproc.contourArea(contour);
1427
+ if (area < 500) continue; // skip tiny noise contours immediately
1428
+
1429
+ Rect rect = Imgproc.boundingRect(contour);
1430
+ double aspect = (double) rect.width / rect.height;
1431
+ if (aspect < 0.3 || aspect > 3.0) continue; // skip unlikely aspect ratios
1432
+
1433
+ // Only expensive processing on likely candidates
1434
+ // ...
1435
+ }
1436
+ \`\`\`
1437
+
1438
+ ### 11. Limelight vs VisionPortal Performance
1439
+
1440
+ Since the Limelight runs on its own processor, it has **zero impact on Control Hub loop time**.
1441
+ This is the Limelight's biggest advantage for performance-sensitive teams.
1442
+
1443
+ \`\`\`
1444
+ VisionPortal (USB webcam): Limelight 3A:
1445
+ ┌─────────────────────┐ ┌──────────────────┐
1446
+ │ Control Hub CPU │ │ Limelight CPU │
1447
+ │ │ │ (processes │
1448
+ │ ┌─────────────────┐ │ │ frames here) │
1449
+ │ │ Your loop() │ │ └──────────────────┘
1450
+ │ │ + motor writes │ │ │
1451
+ │ │ + sensor reads │ │ USB (results only)
1452
+ │ │ + vision proc │◄── competes! │
1453
+ │ └─────────────────┘ │ ┌──────────────────┐
1454
+ │ │ │ Control Hub │
1455
+ └─────────────────────┘ │ │
1456
+ │ ┌──────────────┐ │
1457
+ │ │ Your loop() │ │
1458
+ │ │ + motor │ │
1459
+ │ │ + sensor │ │
1460
+ │ │ (no vision!) │ │
1461
+ │ └──────────────┘ │
1462
+ └──────────────────┘
1463
+ \`\`\`
1464
+
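+ To quantify the difference on your own robot, time the main loop with vision enabled
+ and disabled. A minimal sketch using the SDK's \`ElapsedTime\`:
+
+ \`\`\`java
+ ElapsedTime loopTimer = new ElapsedTime();
+ while (opModeIsActive()) {
+     loopTimer.reset();
+     // ... drive, sensor, and vision reads for one iteration ...
+     telemetry.addData("Loop time", "%.1f ms", loopTimer.milliseconds());
+     telemetry.update();
+ }
+ \`\`\`
+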
1465
+ ### 12. Telemetry Throttling for Vision
1466
+
1467
+ Telemetry is batched: with the default 250ms transmission interval, the values shown on
+ the Driver Station can lag far behind the vision pipeline. Lowering the interval makes
+ the display track detections much more closely, at the cost of extra network traffic:
1468
+
1469
+ \`\`\`java
1470
+ // Reduce telemetry transmission interval for faster vision data display
1471
+ telemetry.setMsTransmissionInterval(11); // 11ms instead of default 250ms
1472
+ \`\`\`
1473
+
1474
+ ### Optimization Checklist
1475
+
1476
+ | Optimization | Impact | Complexity | When to Apply |
1477
+ |---|---|---|---|
1478
+ | Set decimation=3 | High | Easy | Always for driving |
1479
+ | Lower resolution to 640x480 | High | Easy | Unless need long range |
1480
+ | Manual exposure 5-6ms | High | Easy | Always for AprilTags |
1481
+ | Disable LiveView | Medium | Easy | Competition |
1482
+ | Toggle unused processors | Medium | Easy | When switching tasks |
1483
+ | Use MJPEG format | Medium | Easy | Dual cameras or Dashboard |
1484
+ | ROI cropping | High | Moderate | When target position is predictable |
1485
+ | Pre-allocate Mats | Medium | Moderate | Custom processors |
1486
+ | Reduce morphological ops | Low-Med | Easy | Custom processors |
1487
+ | Use Limelight instead | Highest | Moderate | If budget allows |
1488
+ `,
1489
+ multiCamera: `
1490
+ ## Multi-Camera Setup (Dual Webcams)
1491
+
1492
+ FTC robots can run two USB webcams simultaneously using VisionPortal's MultiPortal feature.
1493
+ Common use case: front camera for AprilTags, rear camera for intake/color detection.
1494
+
1495
+ ### MultiPortal Setup
1496
+ \`\`\`java
1497
+ // Step 1: Create viewport IDs for split-screen preview
1498
+ int[] viewportIds = VisionPortal.makeMultiPortalView(2,
1499
+ VisionPortal.MultiPortalLayout.HORIZONTAL);
1500
+ // OR: VisionPortal.MultiPortalLayout.VERTICAL
1501
+
1502
+ // Step 2: Create separate processors for each camera
1503
+ AprilTagProcessor frontAprilTag = new AprilTagProcessor.Builder()
1504
+ .setDrawAxes(true)
1505
+ .build();
1506
+
1507
+ SampleColorProcessor rearColor = new SampleColorProcessor(
1508
+ SampleColorProcessor.SampleColor.YELLOW);
1509
+
1510
+ // Step 3: Build two separate portals
1511
+ VisionPortal frontPortal = new VisionPortal.Builder()
1512
+ .setCamera(hardwareMap.get(WebcamName.class, "Webcam Front"))
1513
+ .addProcessor(frontAprilTag)
1514
+ .setCameraResolution(new Size(640, 480))
1515
+ .setLiveViewContainerId(viewportIds[0])
1516
+ .build();
1517
+
1518
+ VisionPortal rearPortal = new VisionPortal.Builder()
1519
+ .setCamera(hardwareMap.get(WebcamName.class, "Webcam Rear"))
1520
+ .addProcessor(rearColor)
1521
+ .setCameraResolution(new Size(320, 240)) // lower res for color = faster
1522
+ .setLiveViewContainerId(viewportIds[1])
1523
+ .setStreamFormat(VisionPortal.StreamFormat.MJPEG) // save USB bandwidth
1524
+ .build();
1525
+ \`\`\`
1526
+
1527
+ ### USB Bandwidth Considerations
1528
+
1529
+ The Control Hub has two USB buses:
1530
+ - **USB 2.0 bus** — also carries the WiFi radio (shared bandwidth!)
1531
+ - **USB 3.0 bus** — separate, higher bandwidth
1532
+
1533
+ **Best practice:**
1534
+ - Put the primary camera on USB 3.0 (blue port)
1535
+ - Put the secondary camera on USB 2.0 (black port) with MJPEG + lower resolution
1536
+ - OR use a USB hub on the 3.0 port, but both cameras should use MJPEG
1537
+
1538
+ **If cameras share a USB hub:**
1539
+ - Use MJPEG format for both cameras
1540
+ - Reduce resolution on at least one camera
1541
+ - Total bandwidth must stay under USB 2.0 limits (~40 MB/s effective)
1542
+
1543
+ | Resolution | Format | Bandwidth per camera |
+ |---|---|---|
+ | 640x480 | YUY2 | ~18 MB/s at 30 FPS |
+ | 640x480 | MJPEG | ~3-5 MB/s at 30 FPS |
+ | 320x240 | MJPEG | ~1-2 MB/s at 30 FPS |
1548
+
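+ As a sanity check on the YUY2 row: 640 x 480 pixels x 2 bytes per pixel x 30 FPS is
+ roughly 18.4 MB/s, which is why two uncompressed 640x480 streams cannot share a single
+ USB 2.0 bus.
+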
1549
+ ### Lifecycle Management with Multiple Cameras
1550
+ \`\`\`java
1551
+ // Disable processors independently
1552
+ frontPortal.setProcessorEnabled(frontAprilTag, false); // save CPU
1553
+ rearPortal.setProcessorEnabled(rearColor, true); // keep active
1554
+
1555
+ // Close portals in stop()
1556
+ frontPortal.close();
1557
+ rearPortal.close();
1558
+ \`\`\`
1559
+
1560
+ ### Webcam + Limelight (Different Systems)
1561
+ You can run a USB webcam with VisionPortal AND a Limelight 3A simultaneously.
1562
+ They use separate USB ports and separate processing:
1563
+
1564
+ \`\`\`java
1565
+ // USB webcam for color detection (runs on Control Hub CPU)
1566
+ SampleColorProcessor colorProc = new SampleColorProcessor(SampleColorProcessor.SampleColor.YELLOW);
1567
+ VisionPortal webcamPortal = new VisionPortal.Builder()
1568
+ .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
1569
+ .addProcessor(colorProc)
1570
+ .setCameraResolution(new Size(640, 480))
1571
+ .build();
1572
+
1573
+ // Limelight for AprilTag localization (runs on Limelight's own CPU)
1574
+ Limelight3A limelight = hardwareMap.get(Limelight3A.class, "limelight");
1575
+ limelight.setPollRateHz(100);
1576
+ limelight.pipelineSwitch(0); // AprilTag pipeline
1577
+ limelight.start();
1578
+
1579
+ // In loop — both systems provide data independently
1580
+ SampleColorProcessor.DetectionResult color = colorProc.latestResult;
1581
+ LLResult llResult = limelight.getLatestResult();
1582
+ \`\`\`
1583
+ `,
1584
+ visionPatterns: `
1585
+ ## Common FTC Vision Patterns (Recipes)
1586
+
1587
+ ### 1. Init-Phase Detection (Randomized Game Element)
1588
+
1589
+ Many FTC games require detecting a randomized element position during the INIT phase
1590
+ (before the match starts). The detection result determines the autonomous path.
1591
+
1592
+ \`\`\`java
1593
+ @Autonomous(name = "Auto with Init Detection", group = "Competition")
1594
+ public class AutoWithInitDetection extends LinearOpMode {
1595
+
1596
+ private enum ElementPosition { LEFT, CENTER, RIGHT, UNKNOWN }
1597
+ private volatile ElementPosition detectedPosition = ElementPosition.UNKNOWN;
1598
+
1599
+ @Override
1600
+ public void runOpMode() {
1601
+ SampleColorProcessor detector = new SampleColorProcessor(
1602
+ SampleColorProcessor.SampleColor.RED);
1603
+
1604
+ VisionPortal portal = new VisionPortal.Builder()
1605
+ .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
1606
+ .addProcessor(detector)
1607
+ .setCameraResolution(new Size(640, 480))
1608
+ .build();
1609
+
1610
+ // Detect during the entire INIT phase
1611
+ while (opModeInInit()) {
1612
+ SampleColorProcessor.DetectionResult result = detector.latestResult;
1613
+
1614
+ if (result.detected) {
1615
+ // Determine position based on X coordinate
1616
+ double frameWidth = 640;
1617
+ if (result.centerX < frameWidth / 3.0) {
1618
+ detectedPosition = ElementPosition.LEFT;
1619
+ } else if (result.centerX < 2 * frameWidth / 3.0) {
1620
+ detectedPosition = ElementPosition.CENTER;
1621
+ } else {
1622
+ detectedPosition = ElementPosition.RIGHT;
1623
+ }
1624
+ }
1625
+
1626
+ telemetry.addData("Detected", detectedPosition);
1627
+ if (result.detected) {
1628
+ telemetry.addData("At", "(%.0f, %.0f)", result.centerX, result.centerY);
1629
+ }
1630
+ telemetry.update();
1631
+ }
1632
+
1633
+ // Match started — disable vision to save CPU for driving
1634
+ portal.setProcessorEnabled(detector, false);
1635
+
1636
+ // Execute autonomous based on detection
1637
+ switch (detectedPosition) {
1638
+ case LEFT:
1639
+ runLeftPath();
1640
+ break;
1641
+ case CENTER:
1642
+ runCenterPath();
1643
+ break;
1644
+ case RIGHT:
1645
+ runRightPath();
1646
+ break;
1647
+ default:
1648
+ runDefaultPath(); // fallback if detection failed
1649
+ break;
1650
+ }
1651
+
1652
+ portal.close();
1653
+ }
1654
+
1655
+ private void runLeftPath() { /* ... */ }
1656
+ private void runCenterPath() { /* ... */ }
1657
+ private void runRightPath() { /* ... */ }
1658
+ private void runDefaultPath() { /* ... */ }
1659
+ }
1660
+ \`\`\`
1661
+
1662
+ ### 2. Drive-to-AprilTag Alignment (Mecanum)
1663
+
1664
+ Use AprilTag bearing and range for precise alignment with a scoring target:
1665
+
1666
+ \`\`\`java
1667
+ @Config
1668
+ public class AlignmentConstants {
1669
+ public static double DESIRED_RANGE = 12.0; // inches from tag
1670
+ public static double SPEED_GAIN = 0.02; // proportional gain for range
1671
+ public static double STRAFE_GAIN = 0.015; // proportional gain for lateral
1672
+ public static double TURN_GAIN = 0.01; // proportional gain for heading
1673
+ public static double MAX_DRIVE_SPEED = 0.5;
1674
+ public static double MAX_STRAFE = 0.5;
1675
+ public static double MAX_TURN = 0.3;
1676
+ public static int TARGET_TAG_ID = 1; // which tag to align to
1677
+ }
1678
+
1679
+ // In your OpMode loop:
1680
+ private void alignToAprilTag(AprilTagProcessor aprilTag) {
1681
+ List<AprilTagDetection> detections = aprilTag.getDetections();
1682
+
1683
+ AprilTagDetection targetTag = null;
1684
+ for (AprilTagDetection det : detections) {
1685
+ if (det.id == AlignmentConstants.TARGET_TAG_ID && det.metadata != null) {
1686
+ targetTag = det;
1687
+ break;
1688
+ }
1689
+ }
1690
+
1691
+ if (targetTag != null) {
1692
+ double rangeError = targetTag.ftcPose.range - AlignmentConstants.DESIRED_RANGE;
1693
+ double headingError = targetTag.ftcPose.bearing;
1694
+ double yawError = targetTag.ftcPose.yaw;
1695
+
1696
+ // Proportional control
1697
+ double drive = Range.clip(rangeError * AlignmentConstants.SPEED_GAIN,
1698
+ -AlignmentConstants.MAX_DRIVE_SPEED, AlignmentConstants.MAX_DRIVE_SPEED);
1699
+ double strafe = Range.clip(-yawError * AlignmentConstants.STRAFE_GAIN,
1700
+ -AlignmentConstants.MAX_STRAFE, AlignmentConstants.MAX_STRAFE);
1701
+ double turn = Range.clip(headingError * AlignmentConstants.TURN_GAIN,
1702
+ -AlignmentConstants.MAX_TURN, AlignmentConstants.MAX_TURN);
1703
+
1704
+ // Apply to mecanum drive
1705
+ setMecanumPowers(drive, strafe, turn);
1706
+
1707
+ telemetry.addData("Tag", "Range:%.1f Bearing:%.1f Yaw:%.1f",
1708
+ targetTag.ftcPose.range, targetTag.ftcPose.bearing, targetTag.ftcPose.yaw);
1709
+ } else {
1710
+ // No tag visible — stop or use last known position
1711
+ setMecanumPowers(0, 0, 0);
1712
+ telemetry.addData("Tag", "Not visible");
1713
+ }
1714
+ }
1715
+
1716
+ private void setMecanumPowers(double drive, double strafe, double turn) {
1717
+ double fl = drive + strafe + turn;
1718
+ double fr = drive - strafe - turn;
1719
+ double bl = drive - strafe + turn;
1720
+ double br = drive + strafe - turn;
1721
+ double max = Math.max(1.0, Math.max(Math.max(Math.abs(fl), Math.abs(fr)),
1722
+ Math.max(Math.abs(bl), Math.abs(br))));
1723
+ frontLeft.setPower(fl / max);
1724
+ frontRight.setPower(fr / max);
1725
+ backLeft.setPower(bl / max);
1726
+ backRight.setPower(br / max);
1727
+ }
1728
+ \`\`\`
1729
+
1730
+ ### 3. Field Localization from AprilTags (VisionPortal)
1731
+
1732
+ Convert AprilTag detections to robot field position using known tag positions:
1733
+
1734
+ \`\`\`java
1735
+ /**
1736
+ * Compute robot's field position from an AprilTag detection.
1737
+ * Requires the tag's field position to be set in the tag library.
1738
+ *
1739
+ * This is a simplified 2D approach. For full 3D, use the raw pose matrix.
1740
+ */
1741
+ public Pose2D getRobotFieldPose(AprilTagDetection detection, double robotHeading) {
1742
+ if (detection.metadata == null || detection.ftcPose == null) return null;
1743
+
1744
+ // Camera-relative position of tag
1745
+ double tagRange = detection.ftcPose.range; // inches
1746
+ double tagBearing = Math.toRadians(detection.ftcPose.bearing);
1747
+
1748
+ // Tag's known field position (from tag library metadata)
1749
+ // You need to set these when building your tag library
1750
+ double tagFieldX = detection.metadata.fieldPosition.get(0); // inches
1751
+ double tagFieldY = detection.metadata.fieldPosition.get(1); // inches
1752
+
1753
+ // Robot heading (from IMU)
1754
+ double headingRad = Math.toRadians(robotHeading);
1755
+
1756
+ // Camera-to-tag vector in field frame
1757
+ double dx = tagRange * Math.sin(headingRad + tagBearing);
1758
+ double dy = tagRange * Math.cos(headingRad + tagBearing);
1759
+
1760
+ // Robot position = tag position - camera-to-tag vector
1761
+ // (also account for camera offset from robot center if significant)
1762
+ double robotX = tagFieldX - dx;
1763
+ double robotY = tagFieldY - dy;
1764
+
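+ // Pose2D here is assumed to be a simple (x, y, heading) container defined in your
+ // own code, not the SDK's navigation Pose2D (whose constructor also takes units)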
1765
+ return new Pose2D(robotX, robotY, robotHeading);
1766
+ }
1767
+ \`\`\`
1768
+
1769
+ **Note:** For production localization, MegaTag2 on the Limelight is much more accurate and
1770
+ easier to implement than manual localization math with VisionPortal.
1771
+
1772
+ ### 4. Vision + Path Following (Pedro/Road Runner)
1773
+
1774
+ Combine vision detection during auto with path following:
1775
+
1776
+ \`\`\`java
1777
+ // Pattern: detect during init, build path based on detection, execute path
1778
+
1779
+ // In init:
1780
+ while (opModeInInit()) {
1781
+ // Vision processor runs, updates latestResult
1782
+ }
1783
+
1784
+ // After start:
1785
+ SampleColorProcessor.DetectionResult detection = colorProc.latestResult;
1786
+
1787
+ // Disable vision to free CPU for path following
1788
+ portal.setProcessorEnabled(colorProc, false);
1789
+
1790
+ // Build path based on detection
1791
+ PathChain scorePath;
1792
+ if (detection.detected && detection.centerX < 320) {
1793
+ scorePath = buildLeftSidePath();
1794
+ } else {
1795
+ scorePath = buildRightSidePath();
1796
+ }
1797
+
1798
+ follower.followPath(scorePath);
1799
+ while (opModeIsActive() && follower.isBusy()) {
1800
+ follower.update();
1801
+ }
1802
+ \`\`\`
1803
+
1804
+ ### 5. Dashboard Camera Streaming
1805
+
1806
+ Stream the camera feed to FTC Dashboard for remote viewing and debugging:
1807
+
1808
+ \`\`\`java
1809
+ import com.acmerobotics.dashboard.FtcDashboard;
1810
+
1811
+ // startCameraStream takes a CameraStreamSource. A VisionPortal is not one, so stream
+ // a pass-through processor that implements CameraStreamSource (sketch below):
+ CameraStreamProcessor streamProcessor = new CameraStreamProcessor();
+ // ... add streamProcessor to your VisionPortal.Builder along with your other processors, then:
+ FtcDashboard.getInstance().startCameraStream(streamProcessor, 0);
+ // Second parameter = max FPS for streaming (0 = no cap)
1814
+
1815
+ // To stop streaming:
1816
+ FtcDashboard.getInstance().stopCameraStream();
1817
+ \`\`\`
1818
+
1819
+ View the stream at \`http://192.168.43.1:8080/dash\` → Camera tab.
1820
+
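+ The \`CameraStreamProcessor\` referenced above is a pass-through processor that also
+ implements \`CameraStreamSource\`, mirroring the pattern in the FTC Dashboard docs.
+ A trimmed sketch; verify the import paths against your SDK and Dashboard versions:
+
+ \`\`\`java
+ import java.util.concurrent.atomic.AtomicReference;
+
+ import android.graphics.Bitmap;
+ import android.graphics.Canvas;
+
+ import org.firstinspires.ftc.robotcore.external.function.Consumer;
+ import org.firstinspires.ftc.robotcore.external.function.Continuation;
+ import org.firstinspires.ftc.robotcore.external.stream.CameraStreamSource;
+ import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibration;
+ import org.firstinspires.ftc.vision.VisionProcessor;
+ import org.opencv.android.Utils;
+ import org.opencv.core.Mat;
+
+ public class CameraStreamProcessor implements VisionProcessor, CameraStreamSource {
+     private final AtomicReference<Bitmap> lastFrame =
+             new AtomicReference<>(Bitmap.createBitmap(1, 1, Bitmap.Config.RGB_565));
+
+     @Override
+     public void init(int width, int height, CameraCalibration calibration) {
+         lastFrame.set(Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565));
+     }
+
+     @Override
+     public Object processFrame(Mat frame, long captureTimeNanos) {
+         Bitmap b = Bitmap.createBitmap(frame.width(), frame.height(), Bitmap.Config.RGB_565);
+         Utils.matToBitmap(frame, b); // copy the OpenCV frame into an Android bitmap
+         lastFrame.set(b);
+         return null;
+     }
+
+     @Override
+     public void getFrameBitmap(Continuation<? extends Consumer<Bitmap>> continuation) {
+         continuation.dispatch(consumer -> consumer.accept(lastFrame.get()));
+     }
+
+     @Override
+     public void onDrawFrame(Canvas canvas, int onscreenWidth, int onscreenHeight,
+                             float scaleBmpPxToCanvasPx, float scaleCanvasDensity,
+                             Object userContext) { /* no preview overlay */ }
+ }
+ \`\`\`
+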
1821
+ ### 6. Latency Compensation
1822
+
1823
+ For fast-moving robots, account for the delay between frame capture and when you act on it:
1824
+
1825
+ \`\`\`java
1826
+ // VisionPortal: capture time is available in processFrame
1827
+ @Override
1828
+ public Object processFrame(Mat frame, long captureTimeNanos) {
1829
+ // captureTimeNanos = when the frame was captured (System.nanoTime() scale)
1830
+ // Current time - captureTimeNanos = total pipeline latency
1831
+ long latencyMs = (System.nanoTime() - captureTimeNanos) / 1_000_000;
1832
+ // Use this to compensate: where was the robot when this frame was taken?
1833
+ }
1834
+
1835
+ // Limelight: latency is reported directly
1836
+ LLResult result = limelight.getLatestResult();
1837
+ double totalLatency = result.getCaptureLatency() + result.getTargetingLatency();
1838
+ double staleness = result.getStaleness();
1839
+ // If staleness > 100ms, the data is likely too old to act on
1840
+ \`\`\`
1841
+
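+ To actually compensate, you need the robot's pose at capture time. One approach is to
+ buffer timestamped odometry poses every loop and look up the closest sample when a
+ result arrives. A minimal sketch (the \`TimedPose\` helper and buffer size are
+ illustrative, not SDK classes):
+
+ \`\`\`java
+ import java.util.ArrayDeque;
+
+ private static class TimedPose {
+     final long tNanos;
+     final double x, y, heading;
+     TimedPose(long tNanos, double x, double y, double heading) {
+         this.tNanos = tNanos; this.x = x; this.y = y; this.heading = heading;
+     }
+ }
+
+ private final ArrayDeque<TimedPose> poseHistory = new ArrayDeque<>();
+
+ // Call once per loop with the current odometry pose
+ private void recordPose(double x, double y, double heading) {
+     poseHistory.addLast(new TimedPose(System.nanoTime(), x, y, heading));
+     while (poseHistory.size() > 100) poseHistory.removeFirst(); // keep ~1-2 s of history
+ }
+
+ // When a vision result arrives, find the pose closest to the capture timestamp
+ private TimedPose poseAtCapture(long captureTimeNanos) {
+     TimedPose best = null;
+     for (TimedPose p : poseHistory) {
+         if (best == null || Math.abs(p.tNanos - captureTimeNanos)
+                 < Math.abs(best.tNanos - captureTimeNanos)) {
+             best = p;
+         }
+     }
+     return best; // null until the first pose is recorded
+ }
+ \`\`\`
+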
1842
+ ### 7. Graceful Vision Failure Handling
1843
+
1844
+ Always have a fallback when vision fails:
1845
+
1846
+ \`\`\`java
1847
+ // Don't hang waiting for a detection — use a timeout
1848
+ ElapsedTime visionTimer = new ElapsedTime();
1849
+ ElementPosition position = ElementPosition.UNKNOWN;
1850
+
1851
+ while (opModeInInit()) {
1852
+ if (detector.latestResult.detected) {
1853
+ position = classifyPosition(detector.latestResult);
1854
+ visionTimer.reset(); // reset timeout since we got a detection
1855
+ }
1856
+ // Show detection state to driver
1857
+ telemetry.addData("Detection", position);
1858
+ telemetry.addData("Last seen", "%.1f sec ago", visionTimer.seconds());
1859
+ telemetry.update();
1860
+ }
1861
+
1862
+ // If no detection in the last 2 seconds of init, use default
1863
+ if (visionTimer.seconds() > 2.0) {
1864
+ position = ElementPosition.CENTER; // safe default
1865
+ telemetry.addData("WARNING", "Using default position — vision timed out");
1866
+ }
1867
+ \`\`\`
1868
+ `
1869
+ };