react-native-rectangle-doc-scanner 0.66.0 → 0.69.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,80 +1,66 @@
  # React Native Rectangle Doc Scanner
 
- VisionCamera + Fast-OpenCV powered document scanner template built for React Native. You can install it as a reusable module, extend the detection pipeline, and publish to npm out of the box.
+ > ⚠️ **Native module migration in progress**
+ >
+ > A native VisionKit (iOS) + CameraX/ML Kit (Android) implementation is being scaffolded to replace the previous VisionCamera/OpenCV pipeline. The JavaScript API is already aligned with the native contract; the detection/capture engines will be filled in next. See [`docs/native-module-architecture.md`](docs/native-module-architecture.md) for the roadmap.
+
+ Native-ready document scanner for React Native that keeps your overlay completely customisable. The library renders a native camera preview, streams polygon detections back to JavaScript, and exposes an imperative `capture()` method so you can build the exact UX you need.
 
  ## Features
- - Real-time quad detection using `react-native-fast-opencv`
- - Frame processor worklet executed on the UI thread via `react-native-vision-camera`
- - High-resolution processing (1280p) for accurate corner detection
- - Advanced anchor locking system maintains corner positions during camera movement
- - Intelligent edge detection with optimized Canny parameters (50/150 thresholds)
- - Adaptive smoothing with weighted averaging across multiple frames
- - Resize plugin keeps frame processing fast on lower-end devices
- - Skia overlay visualises detected document contours
- - Stability tracker enables auto-capture once the document is steady
+
+ - Native camera preview surfaces on iOS/Android with React overlay support
+ - Polygon detection events (with stability counter) delivered every frame
+ - Skia-powered outline + optional 3×3 grid overlay
+ - Auto-capture and manual capture flows using the same API
+ - Optional `CropEditor` powered by `react-native-perspective-image-cropper`
 
  ## Requirements
- Install the module alongside these peer dependencies (your host app should already include them or install them now):
 
- - `react-native-vision-camera` (v3+) with frame processors enabled
- - `vision-camera-resize-plugin`
- - `react-native-fast-opencv`
- - `react-native-perspective-image-cropper`
- - `react-native-reanimated` + `react-native-worklets-core`
- - `@shopify/react-native-skia`
- - `react`, `react-native`
+ - React Native 0.70+
+ - iOS 13+ (VisionKit availability) / Android 7.0+ (API 24)
+ - Camera permission strings in your host app (`NSCameraUsageDescription`, Android runtime permission handling)
+ - Peer dependencies:
+   - `@shopify/react-native-skia`
+   - `react-native-perspective-image-cropper`
+   - `react`
+   - `react-native`
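
The camera-permission requirement above falls on the host app rather than the library. As a minimal sketch (using only standard React Native APIs; the package itself does not appear to expose a permission helper, so this is an assumption about typical host-app setup), the Android runtime request could look like:

```ts
import { PermissionsAndroid, Platform } from 'react-native';

// iOS only needs the Info.plist entry, e.g.:
// <key>NSCameraUsageDescription</key>
// <string>We use the camera to scan documents.</string>

export async function ensureCameraPermission(): Promise<boolean> {
  if (Platform.OS !== 'android') {
    return true; // iOS prompts automatically the first time the camera is used
  }
  const status = await PermissionsAndroid.request(
    PermissionsAndroid.PERMISSIONS.CAMERA,
    {
      title: 'Camera permission',
      message: 'The document scanner needs access to your camera.',
      buttonPositive: 'OK',
    },
  );
  return status === PermissionsAndroid.RESULTS.GRANTED;
}
```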
 
  ## Installation
 
  ```sh
  yarn add react-native-rectangle-doc-scanner \
-   react-native-vision-camera \
-   vision-camera-resize-plugin \
-   react-native-fast-opencv \
-   react-native-perspective-image-cropper \
-   react-native-reanimated \
-   react-native-worklets-core \
-   @shopify/react-native-skia
- ```
-
- Follow each dependency’s native installation guide:
+   @shopify/react-native-skia \
+   react-native-perspective-image-cropper
 
- - Run `npx pod-install` after adding iOS dependencies.
- - For `react-native-reanimated`, add the Babel plugin, enable the JSI runtime, and ensure Reanimated is the first import in `index.js`.
- - Configure `react-native-fast-opencv` according to its README (adds native OpenCV binaries on both platforms).
- - For `react-native-vision-camera`, enable frame processors by adding the new architecture build and the proxy registration they describe. You must also request camera permissions at runtime.
- - Register the resize plugin once in native code (for example inside your `VisionCameraProxy` setup):
-
- ```ts
- import { VisionCameraProxy } from 'react-native-vision-camera';
- import { ResizePlugin } from 'vision-camera-resize-plugin';
+ # iOS
+ cd ios && pod install
+ ```
 
- VisionCameraProxy.installFrameProcessorPlugin('resize', ResizePlugin);
- ```
+ Android will automatically pick up the included Gradle configuration. If you use a custom package list (old architecture), register `new RNRDocScannerPackage()` manually.
 
  ## Usage
 
- ### Basic Document Scanning
-
  ```tsx
  import React, { useState } from 'react';
- import { StyleSheet, Text, TouchableOpacity, View } from 'react-native';
- import { DocScanner, CropEditor, type CapturedDocument } from 'react-native-rectangle-doc-scanner';
-
- export const ScanScreen = () => {
+ import { StyleSheet, Text, View } from 'react-native';
+ import {
+   DocScanner,
+   CropEditor,
+   type CapturedDocument,
+ } from 'react-native-rectangle-doc-scanner';
+
+ export const ScanScreen: React.FC = () => {
    const [capturedDoc, setCapturedDoc] = useState<CapturedDocument | null>(null);
 
    if (capturedDoc) {
-     // Show crop editor after capture
      return (
        <CropEditor
          document={capturedDoc}
-         overlayColor="rgba(0,0,0,0.5)"
+         overlayColor="rgba(0,0,0,0.6)"
          overlayStrokeColor="#e7a649"
          handlerColor="#e7a649"
          onCropChange={(rectangle) => {
-           console.log('User adjusted corners:', rectangle);
-           // Process the adjusted corners
+           console.log('Adjusted corners:', rectangle);
          }}
        />
      );
@@ -83,17 +69,16 @@ export const ScanScreen = () => {
    return (
      <View style={styles.container}>
        <DocScanner
+         overlayColor="#e7a649"
+         minStableFrames={8}
+         autoCapture
          onCapture={(doc) => {
-           console.log('Document captured:', doc);
+           console.log('Captured document:', doc);
            setCapturedDoc(doc);
          }}
-         overlayColor="#e7a649"
-         autoCapture
-         minStableFrames={8}
-         cameraProps={{ enableZoomGesture: true }}
        >
-         <View style={styles.overlayControls}>
-           <Text style={styles.hint}>Position document in frame</Text>
+         <View style={styles.overlay}>
+           <Text style={styles.hint}>Align the document with the frame</Text>
          </View>
        </DocScanner>
      </View>
@@ -101,141 +86,85 @@ export const ScanScreen = () => {
  };
 
  const styles = StyleSheet.create({
-   container: { flex: 1 },
-   overlayControls: {
+   container: { flex: 1, backgroundColor: '#000' },
+   overlay: {
      position: 'absolute',
      top: 60,
      alignSelf: 'center',
+     paddingHorizontal: 20,
+     paddingVertical: 8,
+     borderRadius: 12,
+     backgroundColor: 'rgba(0,0,0,0.55)',
    },
-   hint: {
-     color: '#fff',
-     fontSize: 16,
-     fontWeight: '600',
-     textShadowColor: 'rgba(0,0,0,0.75)',
-     textShadowOffset: { width: 0, height: 1 },
-     textShadowRadius: 3,
-   },
+   hint: { color: '#fff', fontSize: 15, fontWeight: '600' },
  });
  ```
 
- ### Advanced Configuration
+ The native view renders underneath; anything you pass as `children` sits on top, so you can add custom buttons, headers, progress indicators, etc.
 
- ```tsx
- import { DocScanner, type DetectionConfig } from 'react-native-rectangle-doc-scanner';
-
- const detectionConfig: DetectionConfig = {
-   processingWidth: 1280, // Higher = more accurate but slower
-   cannyLowThreshold: 40, // Lower = detect more edges
-   cannyHighThreshold: 120, // Edge strength threshold
-   snapDistance: 8, // Corner lock sensitivity
-   maxAnchorMisses: 20, // Frames to hold anchor when detection fails
-   maxCenterDelta: 200, // Max camera movement while maintaining lock
- };
+ ## API
 
- <DocScanner
-   detectionConfig={detectionConfig}
-   onCapture={(doc) => {
-     // doc includes: path, quad, width, height
-     console.log('Captured with size:', doc.width, 'x', doc.height);
-   }}
- />
- ```
+ ### `<DocScanner />`
 
- Passing `children` lets you render any UI on top of the camera preview, so you can freely add buttons, tutorials, or progress indicators without modifying the package.
+ | Prop | Type | Default | Description |
+ | --- | --- | --- | --- |
+ | `onCapture` | `(result) => void` | – | Fired when a capture resolves. Returns `path`, `width`, `height`, `quad`. |
+ | `overlayColor` | `string` | `#e7a649` | Stroke colour for the overlay outline. |
+ | `autoCapture` | `boolean` | `true` | When `true`, capture is triggered automatically once stability is reached. |
+ | `minStableFrames` | `number` | `8` | Number of stable frames before auto capture fires. |
+ | `enableTorch` | `boolean` | `false` | Toggles device torch (if supported). |
+ | `quality` | `number` | `90` | JPEG quality (0–100). |
+ | `useBase64` | `boolean` | `false` | Return base64 strings instead of file URIs. |
+ | `showGrid` | `boolean` | `true` | Show the 3×3 helper grid inside the overlay. |
+ | `gridColor` | `string` | `rgba(231,166,73,0.35)` | Colour of grid lines. |
+ | `gridLineWidth` | `number` | `2` | Width of grid lines. |
 
- ## API Reference
+ Imperative helpers are exposed via `DocScannerHandle`:
 
- ### DocScanner Props
+ ```ts
+ import { DocScanner, type DocScannerHandle } from 'react-native-rectangle-doc-scanner';
 
- - `onCapture({ path, quad, width, height })` — called when a photo is taken
-   - `path`: file path to the captured image
-   - `quad`: detected corner coordinates (or `null` if none found)
-   - `width`, `height`: original frame dimensions for coordinate scaling
- - `overlayColor` (default `#e7a649`) — stroke color for the contour overlay
- - `autoCapture` (default `true`) — auto-captures after stability is reached
- - `minStableFrames` (default `8`) — consecutive stable frames required before auto capture
- - `cameraProps` — forwarded to underlying `Camera` (zoom, HDR, torch, etc.)
- - `children` — custom UI rendered over the camera preview
- - `detectionConfig` — advanced detection configuration (see below)
+ const ref = useRef<DocScannerHandle>(null);
 
- ### DetectionConfig
+ const fireCapture = () => {
+   ref.current?.capture().then((result) => {
+     console.log(result);
+   });
+ };
+ ```
 
- Fine-tune the detection algorithm for your specific use case:
+ > The native module currently returns a `"not_implemented"` error from `capture()` until the VisionKit / ML Kit integration is finished. The JS surface is ready for when the native pipeline lands.
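
Given that note, a small defensive sketch (not part of the package README) for calling `capture()` while the native pipeline is still a stub. It assumes the rejection codes shown later in the Android module (`not_implemented`, `capture_in_progress`, `capture_unavailable`) surface on the error's `code` property, which is how React Native promise rejections typically arrive:

```ts
import type { RefObject } from 'react';
import type { DocScannerHandle } from 'react-native-rectangle-doc-scanner';

// Hypothetical helper: tolerate the placeholder rejection until the
// native capture pipeline is implemented.
export async function safeCapture(scanner: RefObject<DocScannerHandle>) {
  try {
    return await scanner.current?.capture();
  } catch (err) {
    const code = (err as { code?: string }).code;
    if (code === 'not_implemented') {
      console.warn('Native capture is not implemented yet; skipping.');
      return null;
    }
    throw err; // other codes seen natively: capture_in_progress, capture_unavailable
  }
}
```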
 
- ```typescript
- interface DetectionConfig {
-   processingWidth?: number; // Default: 1280 (higher = more accurate but slower)
-   cannyLowThreshold?: number; // Default: 40 (lower = detect more edges)
-   cannyHighThreshold?: number; // Default: 120 (edge strength threshold)
-   snapDistance?: number; // Default: 8 (corner lock sensitivity in pixels)
-   maxAnchorMisses?: number; // Default: 20 (frames to hold anchor when detection fails)
-   maxCenterDelta?: number; // Default: 200 (max camera movement while maintaining lock)
- }
- ```
+ ### `<CropEditor />`
 
- ### CropEditor Props
-
- - `document` `CapturedDocument` object from `onCapture` callback
- - `overlayColor` (default `rgba(0,0,0,0.5)`) color of overlay outside crop area
- - `overlayStrokeColor` (default `#e7a649`) color of crop boundary lines
- - `handlerColor` (default `#e7a649`) color of corner drag handles
- - `enablePanStrict` (default `false`) enable strict panning behavior
- - `onCropChange(rectangle)` callback when user adjusts corners
-
- ### Notes on camera behaviour
-
- - If you disable `autoCapture`, the built-in shutter button appears; you can still provide your own UI as `children` to replace or augment it.
- - The internal frame processor handles document detection; do not override `frameProcessor` in `cameraProps`.
- - Adjust `minStableFrames` or tweak lighting conditions if auto capture is too sensitive or too slow.
-
- ## Detection Algorithm
-
- The scanner uses a sophisticated multi-stage pipeline optimized for quality and stability:
-
- ### 1. Pre-processing (Configurable Resolution)
- - Resizes frame to `processingWidth` (default 1280p) for optimal accuracy
- - Converts to grayscale
- - **Enhanced morphological operations**:
-   - MORPH_CLOSE to fill small holes in edges (7x7 kernel)
-   - MORPH_OPEN to remove small noise
- - **Bilateral filter** for edge-preserving smoothing (better than Gaussian)
- - **Adaptive Canny edge detection** with configurable thresholds (default 40/120)
-
- ### 2. Contour Detection
- - Finds external contours using CHAIN_APPROX_SIMPLE
- - Applies convex hull for improved corner accuracy
- - Tests **23 epsilon values** (0.1%-10%) for approxPolyDP to find exact 4 corners
- - Validates quadrilaterals for convexity and valid coordinates
-
- ### 3. Advanced Anchor Locking System
- Once corners are detected, the system maintains stability through:
- - **Snap locking**: Corners lock to positions when movement is minimal
- - **Camera movement tolerance**: Maintains lock during movement (up to 200px center delta)
- - **Persistence**: Holds anchor for up to 20 consecutive failed detections
- - **Adaptive blending**: Smoothly transitions between old and new positions
- - **Confidence building**: Increases lock strength over time (max 30 frames)
- - **Intelligent reset**: Only resets when document clearly changes
-
- ### 4. Quad Validation
- - Area ratio filtering (0.02%-90% of frame)
- - Minimum edge length validation
- - Aspect ratio constraints (max 7:1)
- - Convexity checks to filter invalid shapes
-
- ### 5. Post-Capture Editing
- After capture, users can manually adjust corners using the `CropEditor` component:
- - Grid-based interface with perspective view
- - Draggable corner handles
- - Real-time preview of adjusted crop area
- - Exports adjusted coordinates for final processing
-
- This multi-layered approach ensures high-quality detection with maximum flexibility for various document types and lighting conditions.
+ | Prop | Type | Default | Description |
+ | --- | --- | --- | --- |
+ | `document` | `CapturedDocument` | | Document produced by `onCapture`. |
+ | `overlayColor` | `string` | `rgba(0,0,0,0.5)` | Tint outside the crop area. |
+ | `overlayStrokeColor` | `string` | `#e7a649` | Boundary colour. |
+ | `handlerColor` | `string` | `#e7a649` | Corner handle colour. |
+ | `enablePanStrict` | `boolean` | `false` | Enable strict panning behaviour. |
+ | `onCropChange` | `(rectangle) => void` | | Fires when the user drags handles. |
+
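For completeness, a hedged sketch of an `onCropChange` handler. The `rectangle` type is not spelled out in this README; the shape below assumes the corner-keyed format used by `react-native-perspective-image-cropper` and should be checked against the exported types before relying on it:

```ts
type Point = { x: number; y: number };

// Assumed rectangle shape; verify against the package's exported types.
type CropRectangle = {
  topLeft: Point;
  topRight: Point;
  bottomLeft: Point;
  bottomRight: Point;
};

const handleCropChange = (rectangle: CropRectangle) => {
  // Persist or forward the adjusted corners for final perspective correction.
  console.log('Corners after manual adjustment:', rectangle);
};
```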
+ ## Native scaffolding status
+
+ - ✅ TypeScript wrapper + overlay grid
+ - ✅ iOS view manager / module skeleton (Swift)
+ - ✅ Android view manager / module skeleton (Kotlin)
+ - ☐ VisionKit rectangle detection & capture pipeline
+ - ☐ CameraX + ML Kit rectangle detection & capture pipeline
+ - ☐ Base64 / file output parity tests
+
+ Contributions to the native pipeline are welcome! Start by reading [`docs/native-module-architecture.md`](docs/native-module-architecture.md) for the current plan.
 
  ## Build
+
  ```sh
  yarn build
  ```
+
  Generates the `dist/` output via TypeScript.
 
  ## License
+
  MIT
@@ -0,0 +1,55 @@
+ buildscript {
+   repositories {
+     google()
+     mavenCentral()
+   }
+   dependencies {
+     classpath("com.android.tools.build:gradle:8.1.1")
+     classpath("org.jetbrains.kotlin:kotlin-gradle-plugin:1.9.10")
+   }
+ }
+
+ apply plugin: "com.android.library"
+ apply plugin: "org.jetbrains.kotlin.android"
+
+ android {
+   namespace "com.reactnativerectangledocscanner"
+   compileSdkVersion 34
+
+   defaultConfig {
+     minSdkVersion 24
+     targetSdkVersion 34
+     consumerProguardFiles "consumer-rules.pro"
+   }
+
+   buildTypes {
+     release {
+       minifyEnabled false
+       proguardFiles getDefaultProguardFile("proguard-android-optimize.txt"), "proguard-rules.pro"
+     }
+   }
+
+   compileOptions {
+     sourceCompatibility JavaVersion.VERSION_11
+     targetCompatibility JavaVersion.VERSION_11
+   }
+
+   kotlinOptions {
+     jvmTarget = "11"
+   }
+ }
+
+ repositories {
+   google()
+   mavenCentral()
+ }
+
+ dependencies {
+   implementation "com.facebook.react:react-native:+"
+   implementation "androidx.camera:camera-core:1.3.1"
+   implementation "androidx.camera:camera-camera2:1.3.1"
+   implementation "androidx.camera:camera-lifecycle:1.3.1"
+   implementation "androidx.camera:camera-view:1.3.1"
+   implementation "com.google.mlkit:document-scanner:16.0.0-beta3"
+   implementation "org.jetbrains.kotlinx:kotlinx-coroutines-android:1.7.3"
+ }
@@ -0,0 +1 @@
+ # Consumer ProGuard rules will be added once native implementation is complete.
@@ -0,0 +1 @@
+ # Keep default empty; add rules when native implementation is added.
@@ -0,0 +1,11 @@
+ <?xml version="1.0" encoding="utf-8"?>
+ <manifest xmlns:android="http://schemas.android.com/apk/res/android"
+   package="com.reactnativerectangledocscanner">
+
+   <uses-permission android:name="android.permission.CAMERA" />
+
+   <application>
+     <!-- Placeholder application entry, not used for libraries. -->
+   </application>
+
+ </manifest>
@@ -0,0 +1,37 @@
+ package com.reactnativerectangledocscanner
+
+ import com.facebook.react.bridge.Arguments
+ import com.facebook.react.bridge.Promise
+ import com.facebook.react.bridge.ReactApplicationContext
+ import com.facebook.react.bridge.ReactContextBaseJavaModule
+ import com.facebook.react.bridge.ReactMethod
+ import com.facebook.react.bridge.UiThreadUtil
+ import com.facebook.react.uimanager.UIManagerHelper
+ import com.facebook.react.uimanager.events.EventDispatcher
+
+ class RNRDocScannerModule(
+   private val reactContext: ReactApplicationContext,
+ ) : ReactContextBaseJavaModule(reactContext) {
+
+   override fun getName() = "RNRDocScannerModule"
+
+   @ReactMethod
+   fun capture(viewTag: Int, promise: Promise) {
+     UiThreadUtil.runOnUiThread {
+       val view = UIManagerHelper.getView(reactContext, viewTag) as? RNRDocScannerView
+       if (view == null) {
+         promise.reject("view_not_found", "Unable to locate DocScanner view.")
+         return@runOnUiThread
+       }
+       view.capture(promise)
+     }
+   }
+
+   @ReactMethod
+   fun reset(viewTag: Int) {
+     UiThreadUtil.runOnUiThread {
+       val view = UIManagerHelper.getView(reactContext, viewTag) as? RNRDocScannerView
+       view?.reset()
+     }
+   }
+ }
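
The package's JavaScript wrapper is not part of this diff, but given the `capture(viewTag, promise)` and `reset(viewTag)` signatures above, it presumably bridges to this module roughly as follows (a sketch under that assumption, using the standard `NativeModules` and `findNodeHandle` APIs):

```ts
import { NativeModules, findNodeHandle } from 'react-native';
import type { Component } from 'react';

const { RNRDocScannerModule } = NativeModules;

// Assumed bridge call: resolve the native view tag, then invoke the
// promise-backed capture() exposed by the Kotlin module above.
export async function captureFromView(viewRef: Component | null) {
  const viewTag = findNodeHandle(viewRef);
  if (viewTag == null) {
    throw new Error('DocScanner view is not mounted.');
  }
  return RNRDocScannerModule.capture(viewTag);
}

export function resetFromView(viewRef: Component | null) {
  const viewTag = findNodeHandle(viewRef);
  if (viewTag != null) {
    RNRDocScannerModule.reset(viewTag);
  }
}
```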
@@ -0,0 +1,16 @@
+ package com.reactnativerectangledocscanner
+
+ import com.facebook.react.bridge.ReactApplicationContext
+ import com.facebook.react.bridge.ReactPackage
+ import com.facebook.react.bridge.NativeModule
+ import com.facebook.react.uimanager.ViewManager
+
+ class RNRDocScannerPackage : ReactPackage {
+   override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
+     return listOf(RNRDocScannerModule(reactContext))
+   }
+
+   override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {
+     return listOf(RNRDocScannerViewManager(reactContext))
+   }
+ }
@@ -0,0 +1,129 @@
+ package com.reactnativerectangledocscanner
+
+ import android.content.Context
+ import android.graphics.Color
+ import android.util.AttributeSet
+ import android.util.Log
+ import android.widget.FrameLayout
+ import androidx.camera.core.ImageAnalysis
+ import androidx.camera.core.ImageCapture
+ import androidx.camera.lifecycle.ProcessCameraProvider
+ import androidx.camera.view.PreviewView
+ import androidx.core.content.ContextCompat
+ import com.facebook.react.bridge.Arguments
+ import com.facebook.react.bridge.Promise
+ import com.facebook.react.bridge.ReactContext
+ import com.facebook.react.bridge.WritableMap
+ import com.facebook.react.uimanager.events.RCTEventEmitter
+ import java.util.concurrent.ExecutorService
+ import java.util.concurrent.Executors
+
+ class RNRDocScannerView @JvmOverloads constructor(
+   context: Context,
+   attrs: AttributeSet? = null,
+ ) : FrameLayout(context, attrs) {
+
+   var detectionCountBeforeCapture: Int = 8
+   var autoCapture: Boolean = true
+   var enableTorch: Boolean = false
+     set(value) {
+       field = value
+       updateTorchMode(value)
+     }
+   var quality: Int = 90
+   var useBase64: Boolean = false
+
+   private val previewView: PreviewView = PreviewView(context)
+   private var cameraProvider: ProcessCameraProvider? = null
+   private var imageCapture: ImageCapture? = null
+   private var imageAnalysis: ImageAnalysis? = null
+   private var cameraExecutor: ExecutorService? = null
+   private var currentStableCounter: Int = 0
+   private var captureInFlight: Boolean = false
+
+   init {
+     setBackgroundColor(Color.BLACK)
+     addView(
+       previewView,
+       LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT),
+     )
+     initializeCamera()
+   }
+
+   private fun initializeCamera() {
+     cameraExecutor = Executors.newSingleThreadExecutor()
+     val providerFuture = ProcessCameraProvider.getInstance(context)
+     providerFuture.addListener(
+       {
+         cameraProvider = providerFuture.get()
+         // TODO: Configure Preview + ImageAnalysis + ML Kit processing.
+       },
+       ContextCompat.getMainExecutor(context),
+     )
+   }
+
+   fun emitRectangle(rectangle: WritableMap?) {
+     val event: WritableMap = Arguments.createMap().apply {
+       if (rectangle != null) {
+         putMap("rectangleCoordinates", rectangle)
+         currentStableCounter = (currentStableCounter + 1).coerceAtMost(detectionCountBeforeCapture)
+       } else {
+         putNull("rectangleCoordinates")
+         currentStableCounter = 0
+       }
+       putInt("stableCounter", currentStableCounter)
+       // Frame size placeholders until analysis is wired.
+       putDouble("frameWidth", width.toDouble())
+       putDouble("frameHeight", height.toDouble())
+     }
+
+     (context as? ReactContext)
+       ?.getJSModule(RCTEventEmitter::class.java)
+       ?.receiveEvent(id, "onRectangleDetect", event)
+   }
+
+   fun emitPictureTaken(payload: WritableMap) {
+     (context as? ReactContext)
+       ?.getJSModule(RCTEventEmitter::class.java)
+       ?.receiveEvent(id, "onPictureTaken", payload)
+   }
+
+   fun capture(promise: Promise) {
+     if (captureInFlight) {
+       promise.reject("capture_in_progress", "A capture request is already running.")
+       return
+     }
+
+     val imageCapture = this.imageCapture
+     if (imageCapture == null) {
+       promise.reject("capture_unavailable", "Image capture is not initialised yet.")
+       return
+     }
+
+     captureInFlight = true
+     // TODO: Hook into ImageCapture#takePicture and ML Kit cropping.
+     postDelayed(
+       {
+         captureInFlight = false
+         promise.reject("not_implemented", "Native capture pipeline has not been implemented.")
+       },
+       100,
+     )
+   }
+
+   fun reset() {
+     currentStableCounter = 0
+   }
+
+   private fun updateTorchMode(enabled: Boolean) {
+     // TODO: Toggle torch once camera is integrated.
+     Log.d("RNRDocScanner", "Torch set to $enabled (not yet wired).")
+   }
+
+   override fun onDetachedFromWindow() {
+     super.onDetachedFromWindow()
+     cameraExecutor?.shutdown()
+     cameraExecutor = null
+     cameraProvider?.unbindAll()
+   }
+ }
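
The TypeScript binding for this view is likewise outside this diff; a hedged sketch of how it could be consumed with `requireNativeComponent`, typed from the payload assembled in `emitRectangle` above and the props registered in the view manager that follows:

```ts
import { requireNativeComponent } from 'react-native';
import type { HostComponent, NativeSyntheticEvent } from 'react-native';

// Payload keys mirror emitRectangle(): rectangleCoordinates may be null,
// stableCounter counts consecutive detections, frame* are placeholder sizes.
type RectangleDetectEvent = NativeSyntheticEvent<{
  rectangleCoordinates: unknown | null;
  stableCounter: number;
  frameWidth: number;
  frameHeight: number;
}>;

type NativeDocScannerProps = {
  detectionCountBeforeCapture?: number;
  autoCapture?: boolean;
  enableTorch?: boolean;
  quality?: number;
  useBase64?: boolean;
  onRectangleDetect?: (event: RectangleDetectEvent) => void;
  onPictureTaken?: (event: NativeSyntheticEvent<Record<string, unknown>>) => void;
};

// The component name must match RNRDocScannerViewManager.getName().
const NativeDocScannerView: HostComponent<NativeDocScannerProps> =
  requireNativeComponent<NativeDocScannerProps>('RNRDocScannerView');

export default NativeDocScannerView;
```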
@@ -0,0 +1,50 @@
+ package com.reactnativerectangledocscanner
+
+ import com.facebook.react.bridge.ReactApplicationContext
+ import com.facebook.react.bridge.ReactContext
+ import com.facebook.react.uimanager.SimpleViewManager
+ import com.facebook.react.uimanager.ThemedReactContext
+ import com.facebook.react.uimanager.annotations.ReactProp
+
+ class RNRDocScannerViewManager(
+   private val reactContext: ReactApplicationContext,
+ ) : SimpleViewManager<RNRDocScannerView>() {
+
+   override fun getName() = "RNRDocScannerView"
+
+   override fun createViewInstance(reactContext: ThemedReactContext): RNRDocScannerView {
+     return RNRDocScannerView(reactContext)
+   }
+
+   override fun getExportedCustomDirectEventTypeConstants(): MutableMap<String, Any> {
+     return mutableMapOf(
+       "onRectangleDetect" to mapOf("registrationName" to "onRectangleDetect"),
+       "onPictureTaken" to mapOf("registrationName" to "onPictureTaken"),
+     )
+   }
+
+   @ReactProp(name = "detectionCountBeforeCapture", defaultInt = 8)
+   fun setDetectionCountBeforeCapture(view: RNRDocScannerView, value: Int) {
+     view.detectionCountBeforeCapture = value
+   }
+
+   @ReactProp(name = "autoCapture", defaultBoolean = true)
+   fun setAutoCapture(view: RNRDocScannerView, value: Boolean) {
+     view.autoCapture = value
+   }
+
+   @ReactProp(name = "enableTorch", defaultBoolean = false)
+   fun setEnableTorch(view: RNRDocScannerView, value: Boolean) {
+     view.enableTorch = value
+   }
+
+   @ReactProp(name = "quality", defaultInt = 90)
+   fun setQuality(view: RNRDocScannerView, value: Int) {
+     view.quality = value
+   }
+
+   @ReactProp(name = "useBase64", defaultBoolean = false)
+   fun setUseBase64(view: RNRDocScannerView, value: Boolean) {
+     view.useBase64 = value
+   }
+ }
+ }