neuromeka-grasp 0.1.0__tar.gz

@@ -0,0 +1,206 @@
1
+ Metadata-Version: 2.4
2
+ Name: neuromeka_grasp
3
+ Version: 0.1.0
4
+ Summary: Python client utilities for Neuromeka GraspGen Docker server
5
+ Author-email: Neuromeka <support@neuromeka.com>
6
+ License: Proprietary
7
+ Project-URL: Homepage, https://github.com/neuromeka
8
+ Requires-Python: >=3.9
9
+ Description-Content-Type: text/markdown
10
+ Requires-Dist: numpy>=1.23
11
+ Requires-Dist: pyzmq>=25.0.0
12
+ Requires-Dist: opencv-python-headless>=4.5.0
13
+
14
+ # neuromeka_grasp
15
+
16
+ Python client utilities for the GraspGen Docker server (ZeroMQ + pickle RPC).
17
+ Import and use it directly with `from neuromeka_grasp import GraspGeneration`.
18
+
19
+ ## Installation
20
+ ```bash
21
+ pip install neuromeka_grasp
22
+ ```
23
+
24
+ ## Requirements
25
+ - Python >= 3.9
26
+ - A running GraspGen Docker server from `neuromeka-repo/nrmk_graspgen`
27
+ - The server port is fixed at `5558`; the client defaults to it, so you usually do not need to change it
28
+ - Dependencies: numpy, pyzmq, opencv-python-headless
29
+
30
+ ## Supported grippers (model presets)
31
+ These names map to server-side checkpoints (see `nrmk_graspgen/modules/graspgen/app.py`):
32
+ - `dh_ag_160_95` (default)
33
+ - `robotiq_2f_140`
34
+ - `franka_panda`
35
+ - `single_suction_cup_30mm`
36
+
37
+ You can also pass a `.yml` config path via `model` if the server has other checkpoints.
38
+
39
+ ## Quick start (depth + mask)
40
+ ```python
41
+ from neuromeka_grasp import GraspGeneration
42
+
43
+ client = GraspGeneration(hostname="localhost")
44
+ client.init(fx, fy, cx, cy, model="dh_ag_160_95")
45
+
46
+ resp = client.inference_from_depth_mask(
47
+ depth=depth_np,
48
+ mask=mask_np,
49
+ fx=fx, fy=fy, cx=cx, cy=cy,
50
+ enable_orientation_projection=True,
51
+ approach_axis_source="local_normal_avg",
52
+ enable_roll_projection=True,
53
+ target_roll_direction="auto",
54
+ enable_translation_projection=True,
55
+ translation_axis=-1,
56
+ desired_offset=0.03,
57
+ )
58
+
59
+ if resp["result"] == "SUCCESS":
60
+ grasps = resp["data"]["grasps"]
61
+ scores = resp["data"]["scores"]
62
+ ```
63
+
64
+
65
+ ## Point cloud example
66
+ ```python
67
+ import numpy as np
68
+ from neuromeka_grasp import GraspGeneration
69
+
70
+ client = GraspGeneration(hostname="localhost")
71
+ client.init(fx, fy, cx, cy, model="dh_ag_160_95")
72
+
73
+ object_pc = np.asarray(object_points, dtype=np.float32) # (N, 3)
74
+ scene_pc = np.asarray(scene_points, dtype=np.float32) # (M, 3)
75
+
76
+ resp = client.inference_from_point_cloud(
77
+ object_pc=object_pc,
78
+ scene_pc=scene_pc,
79
+ collision_check=True,
80
+ grasp_threshold=0.8,
81
+ num_grasps=200,
82
+ )
83
+ ```
84
+
85
+ ## Visualization (optional)
86
+ ```python
87
+ from neuromeka_grasp import draw_grasps_overlay
88
+
89
+ overlay = draw_grasps_overlay(
90
+ rgb=rgb_image,
91
+ grasps=grasps, # (K, 4, 4)
92
+ scores=scores, # (K,)
93
+ fx=fx, fy=fy, cx=cx, cy=cy,
94
+ )
95
+ ```
96
+
97
+ ## API overview
98
+ - `GraspGeneration`
99
+ - `init(fx, fy, cx, cy, model="dh_ag_160_95")`
100
+ - `inference_from_point_cloud(...)`
101
+ - `inference_from_depth_mask(...)`
102
+   - `point_cloud_outlier_removal(obj_pc, threshold=0.014, k=20)` (runs client-side; see the sketch after this list)
103
+ - `reset()`, `close()`
104
+ - `draw_grasps_overlay(...)`: Project predicted grasps onto an RGB image.
105
+
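+ `point_cloud_outlier_removal` runs locally in the client (numpy, with an optional scipy speedup) and does not contact the server. A minimal sketch, reusing `object_pc` from the point cloud example above:
+ ```python
+ # drop points whose mean distance to their k nearest neighbors exceeds `threshold` (meters)
+ filtered_pc, removed_pc = client.point_cloud_outlier_removal(object_pc, threshold=0.014, k=20)
+ print(f"kept {len(filtered_pc)} points, removed {len(removed_pc)} outliers")
+ ```
+ 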
106
+ ## init() reference
107
+ **Signature**
108
+ `init(fx, fy, cx, cy, model="dh_ag_160_95")`
109
+
110
+ **Parameters**
111
+ - `fx`, `fy`: Focal lengths in pixels.
112
+ - `cx`, `cy`: Principal point in pixels.
113
+ - `model`: Gripper preset name (see Supported grippers) or a `.yml` config path.
114
+ Config paths are resolved on the server side, so use a path visible to the server container.
115
+
116
+ **Behavior**
117
+ - Stores intrinsics in the client for future calls.
118
+ - Initializes the server model; response includes `gripper_name`, `model_path`, and `intrinsics`.
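+ 
+ A minimal sketch of inspecting the response; the exact key layout isn't pinned down here, so it just prints the whole dict:
+ ```python
+ resp = client.init(fx, fy, cx, cy, model="dh_ag_160_95")
+ print(resp)  # expected to report gripper_name, model_path, and intrinsics
+ ```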
119
+
120
+ ## inference_from_depth_mask() reference
121
+ **Signature**
122
+ `inference_from_depth_mask(depth, mask, fx=None, fy=None, cx=None, cy=None, ...)`
123
+
124
+ **Core inputs**
125
+ - `depth`: (H, W) depth image. Units are scaled by `depth_scale`.
126
+ - `mask`: (H, W) or (H, W, 1) segmentation mask. If `target_object_id` is set, only that label is used.
127
+ - `fx`, `fy`, `cx`, `cy`: Camera intrinsics. If omitted, values set by `init()` are used.
128
+
129
+ **Object/scene point clouds**
130
+ - `target_object_id`: If set, selects a single object label from `mask`.
131
+ - `depth_scale`: Multiplier applied to `depth` before projection.
132
+ - `max_object_points`: Random downsample limit for object points (None disables).
133
+ - `max_scene_points`: Random downsample limit for scene points (used only when `collision_check=True`).
134
+
135
+ **Sampling and filtering**
136
+ - `collision_check`: If True, runs collision filtering with the scene point cloud.
137
+ - `collision_threshold`: Distance (meters) for collision filtering.
138
+ - `grasp_threshold`: Minimum score threshold for grasps.
139
+ - `num_grasps`: Number of grasps to sample before filtering.
140
+ - `return_topk`: If True, also returns top-k grasps (server-side).
141
+ - `topk_num_grasps`: Number of top grasps to keep when `return_topk=True`.
142
+ - `min_grasps`: Minimum number of grasps to return (server-side).
143
+ - `max_tries`: Max sampling attempts (server-side).
144
+
145
+ **Orientation projection**
146
+ - `enable_orientation_projection`: Enable orientation constraint projection.
147
+ - `approach_axis_index`: Gripper-frame approach axis index (0=x, 1=y, 2=z).
148
+ - `approach_axis_source`: `"local_normal"`, `"local_normal_avg"`, or `"global_pca"`.
149
+ - `normal_avg_k`: KNN count for averaging normals in `"local_normal_avg"`.
150
+ - `target_approach_direction`: Target approach direction (3-vector, camera frame).
151
+ - `step_strength_orient`: Orientation correction strength in [0, 1].
152
+ - `use_surface_normal_for_approach`: Legacy switch; affects default behavior when `approach_axis_source` is not set.
153
+ - `contact_proxy_offset`: Offset (meters) along approach axis for contact proxy.
154
+
155
+ **Translation projection**
156
+ - `enable_translation_projection`: Enable translation constraint projection.
157
+ - `translation_axis`: PCA axis index (`-1` = principal axis).
158
+ - `desired_offset`: Target offset (meters) along PCA axis.
159
+ - `step_strength_trans`: Translation correction strength in [0, 1].
160
+
161
+ **Roll projection**
162
+ - `enable_roll_projection`: Enable roll projection around approach axis.
163
+ - `jaw_axis_index`: Gripper-frame jaw axis index (0=x, 1=y, 2=z).
164
+ - `target_roll_direction`: Target jaw direction (3-vector, camera frame) or `"auto"`.
165
+ - `roll_target_axis`: PCA axis index when `target_roll_direction="auto"`.
166
+ - `roll_target_axis_sign`: Optional sign override (+1 or -1) for the PCA axis.
167
+ - `roll_use_local_tangent`: Use local tangent (2D PCA) for `"auto"` roll targets (server support required).
168
+ - `step_strength_roll`: Roll correction strength in [0, 1].
169
+
170
+ **PCA sign and debug**
171
+ - `pca_axis_sign_ref`: PCA sign reference (`"camera_x"`, `"roll_ref"`, or a 3-vector).
172
+ - `projection_debug`: Enable projection debug logging on the server.
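+ 
+ **Example**
+ Putting the parameter groups above together; `depth_np` and `mask_np` are from the quick start, and the numeric values (label `3`, millimeter depth, `normal_avg_k=30`) are illustrative assumptions rather than recommended defaults:
+ ```python
+ resp = client.inference_from_depth_mask(
+     depth=depth_np,          # (H, W) uint16 depth in millimeters
+     mask=mask_np,            # (H, W) integer label image
+     target_object_id=3,      # keep only label 3 as the object
+     depth_scale=0.001,       # millimeters -> meters
+     max_object_points=4096,
+     collision_check=True,
+     collision_threshold=0.02,
+     enable_orientation_projection=True,
+     approach_axis_source="local_normal_avg",
+     normal_avg_k=30,
+     step_strength_orient=0.5,
+     enable_roll_projection=True,
+     target_roll_direction="auto",
+     roll_target_axis=0,
+     pca_axis_sign_ref="camera_x",
+     projection_debug=True,
+ )
+ ```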
173
+
174
+ ## Axis convention notes
175
+ - The rotation matrix columns represent the gripper-frame axes expressed in the camera frame.
176
+ - `approach_axis_index` and `jaw_axis_index` refer to those gripper-frame axes.
177
+ - When using `"global_pca"`, some server builds allow `approach_axis_index` or `roll_target_axis`
178
+ values `-10/-11/-12` to select the opposite direction of PCA axis 0/1/2.
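+ 
+ A sketch of reading those axes back out of a returned pose; which column is the approach axis depends on the gripper preset, so index `2` below is an assumption:
+ ```python
+ import numpy as np
+ 
+ grasp = np.asarray(grasps[0])   # (4, 4) pose in the camera frame
+ R, t = grasp[:3, :3], grasp[:3, 3]
+ approach_dir = R[:, 2]          # gripper approach axis expressed in camera coordinates
+ jaw_dir = R[:, 0]               # gripper jaw axis expressed in camera coordinates
+ 
+ # hypothetical pre-grasp pose 5 cm back along the approach direction
+ pre_grasp = grasp.copy()
+ pre_grasp[:3, 3] = t - 0.05 * approach_dir
+ ```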
179
+
180
+ ## PCA sign note
181
+ PCA eigenvectors can flip sign across runs. Use `pca_axis_sign_ref` to keep sign consistent
182
+ and avoid ambiguous translation/roll directions.
183
+
184
+ ## Inputs and shapes
185
+ - `depth`: HxW float or uint depth image.
186
+ - `mask`: HxW or HxWx1 integer mask. If `target_object_id` is set, only that label is used.
187
+ - `depth_scale`: Multiplier applied to `depth` before projection.
188
+ - `object_pc`, `scene_pc`: (N, 3) and (M, 3) float32 point clouds in camera frame.
189
+ - `grasps`: (K, 4, 4) homogeneous poses, `scores`: (K,) in [0, 1].
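+ 
+ For example, selecting the highest-scoring grasp from a response and decomposing its pose:
+ ```python
+ import numpy as np
+ 
+ grasps = np.asarray(resp["data"]["grasps"])   # (K, 4, 4)
+ scores = np.asarray(resp["data"]["scores"])   # (K,)
+ best = grasps[int(np.argmax(scores))]
+ position = best[:3, 3]      # translation in meters, camera frame
+ rotation = best[:3, :3]     # gripper axes as columns
+ ```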
190
+
191
+ ## Projection parameters (summary)
192
+ - Orientation: `enable_orientation_projection`, `approach_axis_index`,
193
+ `approach_axis_source` (`local_normal`, `local_normal_avg`, `global_pca`),
194
+ `normal_avg_k`, `contact_proxy_offset`
195
+ - Translation: `enable_translation_projection`, `translation_axis`, `desired_offset`
196
+ - Roll: `enable_roll_projection`, `target_roll_direction` (supports `"auto"`),
197
+ `roll_target_axis`, `roll_target_axis_sign`, `roll_use_local_tangent`
198
+ - Sign control and debug: `pca_axis_sign_ref`, `projection_debug`
199
+
200
+ For detailed parameter semantics, refer to the server README.
201
+
202
+ ## Notes
203
+ - `inference_from_depth_mask` requires intrinsics set via `init()` or passed directly.
204
+ - Server port is fixed at `5558`; the client default matches this and typically should not be changed.
205
+ - `PickleClient` is synchronous, blocking, and sets no socket timeouts; configure `zmq.RCVTIMEO`/`zmq.SNDTIMEO` on the client socket or add retry logic if the server may be unreachable (see the sketch below).
206
+ - Pickle is not secure against untrusted sources. Use only in trusted environments.
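+ 
+ A sketch of client-side timeouts. It relies on the implementation detail that `GraspGeneration` exposes its `PickleClient` as `client` and the REQ socket as `socket`:
+ ```python
+ import zmq
+ 
+ sock = client.client.socket
+ sock.setsockopt(zmq.RCVTIMEO, 10_000)  # recv raises zmq.error.Again after 10 s
+ sock.setsockopt(zmq.SNDTIMEO, 5_000)
+ sock.setsockopt(zmq.LINGER, 0)
+ # note: after a REQ timeout the socket is stuck mid-request and must be
+ # recreated before it can be reused
+ ```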
@@ -0,0 +1,193 @@
1
+ # neuromeka_grasp
2
+
3
+ Python client utilities for the GraspGen Docker server (ZeroMQ + pickle RPC).
4
+ Import and use it directly with `from neuromeka_grasp import GraspGeneration`.
5
+
6
+ ## Installation
7
+ ```bash
8
+ pip install neuromeka_grasp
9
+ ```
10
+
11
+ ## Requirements
12
+ - Python >= 3.9
13
+ - A running GraspGen Docker server from `neuromeka-repo/nrmk_graspgen`
14
+ - The server port is fixed at `5558`; the client defaults to it, so you usually do not need to change it
15
+ - Dependencies: numpy, pyzmq, opencv-python-headless
16
+
17
+ ## Supported grippers (model presets)
18
+ These names map to server-side checkpoints (see `nrmk_graspgen/modules/graspgen/app.py`):
19
+ - `dh_ag_160_95` (default)
20
+ - `robotiq_2f_140`
21
+ - `franka_panda`
22
+ - `single_suction_cup_30mm`
23
+
24
+ You can also pass a `.yml` config path via `model` if the server has other checkpoints.
25
+
26
+ ## Quick start (depth + mask)
27
+ ```python
28
+ from neuromeka_grasp import GraspGeneration
29
+
30
+ client = GraspGeneration(hostname="localhost")
31
+ client.init(fx, fy, cx, cy, model="dh_ag_160_95")
32
+
33
+ resp = client.inference_from_depth_mask(
34
+ depth=depth_np,
35
+ mask=mask_np,
36
+ fx=fx, fy=fy, cx=cx, cy=cy,
37
+ enable_orientation_projection=True,
38
+ approach_axis_source="local_normal_avg",
39
+ enable_roll_projection=True,
40
+ target_roll_direction="auto",
41
+ enable_translation_projection=True,
42
+ translation_axis=-1,
43
+ desired_offset=0.03,
44
+ )
45
+
46
+ if resp["result"] == "SUCCESS":
47
+ grasps = resp["data"]["grasps"]
48
+ scores = resp["data"]["scores"]
49
+ ```
50
+
51
+
52
+ ## Point cloud example
53
+ ```python
54
+ import numpy as np
55
+ from neuromeka_grasp import GraspGeneration
56
+
57
+ client = GraspGeneration(hostname="localhost")
58
+ client.init(fx, fy, cx, cy, model="dh_ag_160_95")
59
+
60
+ object_pc = np.asarray(object_points, dtype=np.float32) # (N, 3)
61
+ scene_pc = np.asarray(scene_points, dtype=np.float32) # (M, 3)
62
+
63
+ resp = client.inference_from_point_cloud(
64
+ object_pc=object_pc,
65
+ scene_pc=scene_pc,
66
+ collision_check=True,
67
+ grasp_threshold=0.8,
68
+ num_grasps=200,
69
+ )
70
+ ```
71
+
72
+ ## Visualization (optional)
73
+ ```python
74
+ from neuromeka_grasp import draw_grasps_overlay
75
+
76
+ overlay = draw_grasps_overlay(
77
+ rgb=rgb_image,
78
+ grasps=grasps, # (K, 4, 4)
79
+ scores=scores, # (K,)
80
+ fx=fx, fy=fy, cx=cx, cy=cy,
81
+ )
82
+ ```
83
+
84
+ ## API overview
85
+ - `GraspGeneration`
86
+ - `init(fx, fy, cx, cy, model="dh_ag_160_95")`
87
+ - `inference_from_point_cloud(...)`
88
+ - `inference_from_depth_mask(...)`
89
+ - `point_cloud_outlier_removal(obj_pc, threshold=0.014, k=20)`
90
+ - `reset()`, `close()`
91
+ - `draw_grasps_overlay(...)`: Project predicted grasps onto an RGB image.
92
+
93
+ ## init() reference
94
+ **Signature**
95
+ `init(fx, fy, cx, cy, model="dh_ag_160_95")`
96
+
97
+ **Parameters**
98
+ - `fx`, `fy`: Focal lengths in pixels.
99
+ - `cx`, `cy`: Principal point in pixels.
100
+ - `model`: Gripper preset name (see Supported grippers) or a `.yml` config path.
101
+ Config paths are resolved on the server side, so use a path visible to the server container.
102
+
103
+ **Behavior**
104
+ - Stores intrinsics in the client for future calls.
105
+ - Initializes the server model; response includes `gripper_name`, `model_path`, and `intrinsics`.
106
+
107
+ ## inference_from_depth_mask() reference
108
+ **Signature**
109
+ `inference_from_depth_mask(depth, mask, fx=None, fy=None, cx=None, cy=None, ...)`
110
+
111
+ **Core inputs**
112
+ - `depth`: (H, W) depth image. Units are scaled by `depth_scale`.
113
+ - `mask`: (H, W) or (H, W, 1) segmentation mask. If `target_object_id` is set, only that label is used.
114
+ - `fx`, `fy`, `cx`, `cy`: Camera intrinsics. If omitted, values set by `init()` are used.
115
+
116
+ **Object/scene point clouds**
117
+ - `target_object_id`: If set, selects a single object label from `mask`.
118
+ - `depth_scale`: Multiplier applied to `depth` before projection.
119
+ - `max_object_points`: Random downsample limit for object points (None disables).
120
+ - `max_scene_points`: Random downsample limit for scene points (used only when `collision_check=True`).
121
+
122
+ **Sampling and filtering**
123
+ - `collision_check`: If True, runs collision filtering with the scene point cloud.
124
+ - `collision_threshold`: Distance (meters) for collision filtering.
125
+ - `grasp_threshold`: Minimum score threshold for grasps.
126
+ - `num_grasps`: Number of grasps to sample before filtering.
127
+ - `return_topk`: If True, also returns top-k grasps (server-side).
128
+ - `topk_num_grasps`: Number of top grasps to keep when `return_topk=True`.
129
+ - `min_grasps`: Minimum number of grasps to return (server-side).
130
+ - `max_tries`: Max sampling attempts (server-side).
131
+
132
+ **Orientation projection**
133
+ - `enable_orientation_projection`: Enable orientation constraint projection.
134
+ - `approach_axis_index`: Gripper-frame approach axis index (0=x, 1=y, 2=z).
135
+ - `approach_axis_source`: `"local_normal"`, `"local_normal_avg"`, or `"global_pca"`.
136
+ - `normal_avg_k`: KNN count for averaging normals in `"local_normal_avg"`.
137
+ - `target_approach_direction`: Target approach direction (3-vector, camera frame).
138
+ - `step_strength_orient`: Orientation correction strength in [0, 1].
139
+ - `use_surface_normal_for_approach`: Legacy switch; affects default behavior when `approach_axis_source` is not set.
140
+ - `contact_proxy_offset`: Offset (meters) along approach axis for contact proxy.
141
+
142
+ **Translation projection**
143
+ - `enable_translation_projection`: Enable translation constraint projection.
144
+ - `translation_axis`: PCA axis index (`-1` = principal axis).
145
+ - `desired_offset`: Target offset (meters) along PCA axis.
146
+ - `step_strength_trans`: Translation correction strength in [0, 1].
147
+
148
+ **Roll projection**
149
+ - `enable_roll_projection`: Enable roll projection around approach axis.
150
+ - `jaw_axis_index`: Gripper-frame jaw axis index (0=x, 1=y, 2=z).
151
+ - `target_roll_direction`: Target jaw direction (3-vector, camera frame) or `"auto"`.
152
+ - `roll_target_axis`: PCA axis index when `target_roll_direction="auto"`.
153
+ - `roll_target_axis_sign`: Optional sign override (+1 or -1) for the PCA axis.
154
+ - `roll_use_local_tangent`: Use local tangent (2D PCA) for `"auto"` roll targets (server support required).
155
+ - `step_strength_roll`: Roll correction strength in [0, 1].
156
+
157
+ **PCA sign and debug**
158
+ - `pca_axis_sign_ref`: PCA sign reference (`"camera_x"`, `"roll_ref"`, or a 3-vector).
159
+ - `projection_debug`: Enable projection debug logging on the server.
160
+
161
+ ## Axis convention notes
162
+ - The rotation matrix columns represent the gripper-frame axes expressed in the camera frame.
163
+ - `approach_axis_index` and `jaw_axis_index` refer to those gripper-frame axes.
164
+ - When using `"global_pca"`, some server builds allow `approach_axis_index` or `roll_target_axis`
165
+ values `-10/-11/-12` to select the opposite direction of PCA axis 0/1/2.
166
+
167
+ ## PCA sign note
168
+ PCA eigenvectors can flip sign across runs. Use `pca_axis_sign_ref` to keep sign consistent
169
+ and avoid ambiguous translation/roll directions.
170
+
171
+ ## Inputs and shapes
172
+ - `depth`: HxW float or uint depth image.
173
+ - `mask`: HxW or HxWx1 integer mask. If `target_object_id` is set, only that label is used.
174
+ - `depth_scale`: Multiplier applied to `depth` before projection.
175
+ - `object_pc`, `scene_pc`: (N, 3) and (M, 3) float32 point clouds in camera frame.
176
+ - `grasps`: (K, 4, 4) homogeneous poses, `scores`: (K,) in [0, 1].
177
+
178
+ ## Projection parameters (summary)
179
+ - Orientation: `enable_orientation_projection`, `approach_axis_index`,
180
+ `approach_axis_source` (`local_normal`, `local_normal_avg`, `global_pca`),
181
+ `normal_avg_k`, `contact_proxy_offset`
182
+ - Translation: `enable_translation_projection`, `translation_axis`, `desired_offset`
183
+ - Roll: `enable_roll_projection`, `target_roll_direction` (supports `"auto"`),
184
+ `roll_target_axis`, `roll_target_axis_sign`, `roll_use_local_tangent`
185
+ - Sign control and debug: `pca_axis_sign_ref`, `projection_debug`
186
+
187
+ For detailed parameter semantics, refer to the server README.
188
+
189
+ ## Notes
190
+ - `inference_from_depth_mask` requires intrinsics set via `init()` or passed directly.
191
+ - Server port is fixed at `5558`; the client default matches this and typically should not be changed.
192
+ - `PickleClient` is synchronous, blocking, and sets no socket timeouts; configure `zmq.RCVTIMEO`/`zmq.SNDTIMEO` on the client socket or add retry logic if the server may be unreachable.
193
+ - Pickle is not secure against untrusted sources. Use only in trusted environments.
@@ -0,0 +1,4 @@
1
+ from .client import GraspGeneration, PickleClient
2
+ from .overlay import draw_grasps_overlay
3
+
4
+ __all__ = ["GraspGeneration", "PickleClient", "draw_grasps_overlay"]
@@ -0,0 +1,373 @@
1
+ from __future__ import annotations  # make the "X | Y" annotations below valid on Python 3.9
2
+ import pickle
3
+ from typing import Optional, Sequence
4
+
5
+ import numpy as np
6
+ import zmq
7
+
8
+
9
+ class PickleClient:
10
+ """ZeroMQ 기반 간단한 pickle RPC 클라이언트."""
11
+
12
+ def __init__(self, hostname: str, port: int = 5558):
13
+ self.hostname = hostname
14
+ self.port = port
15
+ self.context = zmq.Context()
16
+ self.socket = self.context.socket(zmq.REQ)
17
+ self.socket.connect(f"tcp://{self.hostname}:{self.port}")
18
+
19
+ def send_data(self, data: dict):
20
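+         # REQ/REP is lockstep: every send must be matched by a recv before the next send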
+ self.socket.send(pickle.dumps(data))
21
+ response = pickle.loads(self.socket.recv())
22
+ return response
23
+
24
+ def close(self):
25
+ self.socket.close()
26
+ self.context.term()
27
+
28
+
29
+ class GraspGeneration:
30
+ """GraspGen Docker 서버용 클라이언트.
31
+
32
+ 서버가 제공하는 projection 옵션을 모두 전달할 수 있도록 vfm_tester 및 nrmk_graspgen 양쪽을 통합했습니다.
33
+ """
34
+
35
+ def __init__(self, hostname: str, port: int = 5558):
36
+ self.client = PickleClient(hostname, port)
37
+ self.intrinsics: Optional[dict] = None
38
+ self.model: Optional[str] = None
39
+
40
+     # ---------- basic utilities ----------
41
+ @staticmethod
42
+ def _to_2d_mask(mask):
43
+ if mask is None:
44
+ return None
45
+ if mask.ndim == 3 and mask.shape[-1] == 1:
46
+ return mask[:, :, 0]
47
+ return mask
48
+
49
+ @staticmethod
50
+ def _downsample_points(points: np.ndarray, max_points: Optional[int]):
51
+ if max_points is None or max_points <= 0:
52
+ return points
53
+ if points.shape[0] <= max_points:
54
+ return points
55
+ idx = np.random.choice(points.shape[0], max_points, replace=False)
56
+ return points[idx]
57
+
58
+ @staticmethod
59
+ def depth_to_points(
60
+ depth: np.ndarray,
61
+ fx: float,
62
+ fy: float,
63
+ cx: float,
64
+ cy: float,
65
+ mask: Optional[np.ndarray] = None,
66
+ depth_scale: float = 1.0,
67
+ ) -> np.ndarray:
68
+ depth = depth.astype(np.float32) * depth_scale
69
+ h, w = depth.shape
70
+ xs = np.arange(w)
71
+ ys = np.arange(h)
72
+ xmap, ymap = np.meshgrid(xs, ys)
73
+
74
+ valid = depth > 0
75
+ if mask is not None:
76
+ valid = valid & mask
77
+
78
+ z = depth[valid]
79
+ if z.size == 0:
80
+ return np.zeros((0, 3), dtype=np.float32)
81
+
82
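+         # pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy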
+ x = (xmap[valid] - cx) * z / fx
83
+ y = (ymap[valid] - cy) * z / fy
84
+ return np.stack([x, y, z], axis=1).astype(np.float32)
85
+
86
+ def depth_mask_to_point_clouds(
87
+ self,
88
+ depth: np.ndarray,
89
+ mask: np.ndarray,
90
+ fx: float,
91
+ fy: float,
92
+ cx: float,
93
+ cy: float,
94
+ target_object_id: Optional[int] = None,
95
+ depth_scale: float = 1.0,
96
+ remove_object_from_scene: bool = True,
97
+ ) -> tuple[np.ndarray, np.ndarray]:
98
+ mask = self._to_2d_mask(mask)
99
+ if target_object_id is None:
100
+ obj_mask = mask > 0
101
+ else:
102
+ obj_mask = mask == target_object_id
103
+
104
+ object_pc = self.depth_to_points(
105
+ depth, fx, fy, cx, cy, mask=obj_mask, depth_scale=depth_scale
106
+ )
107
+
108
+ scene_mask = depth > 0
109
+ if remove_object_from_scene:
110
+ scene_mask = scene_mask & (~obj_mask)
111
+ scene_pc = self.depth_to_points(
112
+ depth, fx, fy, cx, cy, mask=scene_mask, depth_scale=depth_scale
113
+ )
114
+
115
+ return object_pc, scene_pc
116
+
117
+ @staticmethod
118
+ def _mean_knn_distances_numpy(points: np.ndarray, k_eff: int, chunk_size: int = 1024):
119
+ n = points.shape[0]
120
+ mean_dists = np.empty(n, dtype=np.float32)
121
+ for start in range(0, n, chunk_size):
122
+ end = min(start + chunk_size, n)
123
+ diff = points[start:end, None, :] - points[None, :, :]
124
+ dists = np.linalg.norm(diff, axis=-1)
125
+ if k_eff < n:
126
+ idx = np.argpartition(dists, kth=k_eff - 1, axis=1)[:, :k_eff]
127
+ row = np.take_along_axis(dists, idx, axis=1)
128
+ else:
129
+ row = np.sort(dists, axis=1)
130
+ if row.shape[1] > 1:
131
+ mean = row[:, 1:].mean(axis=1)
132
+ else:
133
+ mean = row[:, 0]
134
+ mean_dists[start:end] = mean.astype(np.float32)
135
+ return mean_dists
136
+
137
+ def point_cloud_outlier_removal(
138
+ self, obj_pc: np.ndarray, threshold: float = 0.014, k: int = 20
139
+ ) -> tuple[np.ndarray, np.ndarray]:
140
+ obj_pc = np.asarray(obj_pc, dtype=np.float32)
141
+ if obj_pc.ndim != 2 or obj_pc.shape[1] != 3:
142
+ raise ValueError("obj_pc must be Nx3")
143
+ if obj_pc.shape[0] == 0:
144
+ return obj_pc.copy(), obj_pc.copy()
145
+
146
+ k_eff = min(k + 1, obj_pc.shape[0])
147
+ mean_dists = None
148
+ try:
149
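+             # scipy accelerates the kNN query; any failure falls through to the numpy fallback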
+ from scipy.spatial import cKDTree
150
+
151
+ tree = cKDTree(obj_pc)
152
+ dists, _ = tree.query(obj_pc, k=k_eff)
153
+ if k_eff > 1:
154
+ mean_dists = dists[:, 1:].mean(axis=1)
155
+ else:
156
+ mean_dists = dists[:, 0]
157
+ except Exception:
158
+ mean_dists = self._mean_knn_distances_numpy(obj_pc, k_eff)
159
+
160
+ mask = mean_dists < threshold
161
+ filtered_pc = obj_pc[mask]
162
+ removed_pc = obj_pc[~mask]
163
+ return filtered_pc, removed_pc
164
+
165
+ # ---------- RPC ----------
166
+ def init(self, fx: float, fy: float, cx: float, cy: float, model: str = "dh_ag_160_95"):
167
+ self.intrinsics = {"fx": fx, "fy": fy, "cx": cx, "cy": cy}
168
+ self.model = model
169
+ payload = {"operation": "init", "model": model, "intrinsics": self.intrinsics}
170
+ return self.client.send_data(payload)
171
+
172
+ def inference_from_point_cloud(
173
+ self,
174
+ object_pc: np.ndarray,
175
+ scene_pc: Optional[np.ndarray] = None,
176
+ collision_check: bool = False,
177
+ grasp_threshold: float = 0.8,
178
+ num_grasps: int = 200,
179
+ return_topk: bool = False,
180
+ topk_num_grasps: int = -1,
181
+ collision_threshold: float = 0.02,
182
+ min_grasps: Optional[int] = None,
183
+ max_tries: Optional[int] = None,
184
+ enable_orientation_projection: Optional[bool] = None,
185
+ enable_translation_projection: Optional[bool] = None,
186
+ approach_axis_index: Optional[int] = None,
187
+ approach_axis_source: Optional[str] = None,
188
+ normal_avg_k: Optional[int] = None,
189
+ target_approach_direction: Optional[Sequence[float]] = None,
190
+ translation_axis: Optional[int] = None,
191
+ desired_offset: Optional[float] = None,
192
+ step_strength_orient: Optional[float] = None,
193
+ step_strength_trans: Optional[float] = None,
194
+ enable_roll_projection: Optional[bool] = None,
195
+ jaw_axis_index: Optional[int] = None,
196
+ target_roll_direction: Optional[Sequence[float] | str] = None,
197
+ roll_target_axis: Optional[int] = None,
198
+ roll_target_axis_sign: Optional[int] = None,
199
+ roll_use_local_tangent: Optional[bool] = None,
200
+ step_strength_roll: Optional[float] = None,
201
+ use_surface_normal_for_approach: Optional[bool] = None,
202
+ contact_proxy_offset: Optional[float] = None,
203
+ pca_axis_sign_ref: Optional[Sequence[float] | str] = None,
204
+ projection_debug: Optional[bool] = None,
205
+ ):
206
+ payload = {
207
+ "operation": "inference",
208
+ "object_pc": object_pc,
209
+ "grasp_threshold": grasp_threshold,
210
+ "num_grasps": num_grasps,
211
+ "return_topk": return_topk,
212
+ "topk_num_grasps": topk_num_grasps,
213
+ "collision_check": collision_check,
214
+ "collision_threshold": collision_threshold,
215
+ }
216
+ if min_grasps is not None:
217
+ payload["min_grasps"] = min_grasps
218
+ if max_tries is not None:
219
+ payload["max_tries"] = max_tries
220
+ if enable_orientation_projection is not None:
221
+ payload["enable_orientation_projection"] = enable_orientation_projection
222
+ if enable_translation_projection is not None:
223
+ payload["enable_translation_projection"] = enable_translation_projection
224
+ if approach_axis_index is not None:
225
+ payload["approach_axis_index"] = approach_axis_index
226
+ if approach_axis_source is not None:
227
+ payload["approach_axis_source"] = approach_axis_source
228
+ if normal_avg_k is not None:
229
+ payload["normal_avg_k"] = normal_avg_k
230
+ if target_approach_direction is not None:
231
+ payload["target_approach_direction"] = target_approach_direction
232
+ if translation_axis is not None:
233
+ payload["translation_axis"] = translation_axis
234
+ if desired_offset is not None:
235
+ payload["desired_offset"] = desired_offset
236
+ if step_strength_orient is not None:
237
+ payload["step_strength_orient"] = step_strength_orient
238
+ if step_strength_trans is not None:
239
+ payload["step_strength_trans"] = step_strength_trans
240
+ if enable_roll_projection is not None:
241
+ payload["enable_roll_projection"] = enable_roll_projection
242
+ if jaw_axis_index is not None:
243
+ payload["jaw_axis_index"] = jaw_axis_index
244
+ if target_roll_direction is not None:
245
+ payload["target_roll_direction"] = target_roll_direction
246
+ if roll_target_axis is not None:
247
+ payload["roll_target_axis"] = roll_target_axis
248
+ if roll_target_axis_sign is not None:
249
+ payload["roll_target_axis_sign"] = roll_target_axis_sign
250
+ if roll_use_local_tangent is not None:
251
+ payload["roll_use_local_tangent"] = roll_use_local_tangent
252
+ if step_strength_roll is not None:
253
+ payload["step_strength_roll"] = step_strength_roll
254
+ if use_surface_normal_for_approach is not None:
255
+ payload["use_surface_normal_for_approach"] = use_surface_normal_for_approach
256
+ if contact_proxy_offset is not None:
257
+ payload["contact_proxy_offset"] = contact_proxy_offset
258
+ if pca_axis_sign_ref is not None:
259
+ payload["pca_axis_sign_ref"] = pca_axis_sign_ref
260
+ if projection_debug is not None:
261
+ payload["projection_debug"] = projection_debug
262
+ if collision_check and scene_pc is not None:
263
+ payload["scene_pc"] = scene_pc
264
+ return self.client.send_data(payload)
265
+
266
+ def inference_from_depth_mask(
267
+ self,
268
+ depth: np.ndarray,
269
+ mask: np.ndarray,
270
+ fx: Optional[float] = None,
271
+ fy: Optional[float] = None,
272
+ cx: Optional[float] = None,
273
+ cy: Optional[float] = None,
274
+ target_object_id: Optional[int] = None,
275
+ depth_scale: float = 1.0,
276
+ max_object_points: Optional[int] = None,
277
+ max_scene_points: int = 8192,
278
+ collision_check: bool = False,
279
+ grasp_threshold: float = 0.8,
280
+ num_grasps: int = 200,
281
+ return_topk: bool = False,
282
+ topk_num_grasps: int = -1,
283
+ collision_threshold: float = 0.02,
284
+ min_grasps: Optional[int] = None,
285
+ max_tries: Optional[int] = None,
286
+ enable_orientation_projection: Optional[bool] = None,
287
+ enable_translation_projection: Optional[bool] = None,
288
+ approach_axis_index: Optional[int] = None,
289
+ approach_axis_source: Optional[str] = None,
290
+ normal_avg_k: Optional[int] = None,
291
+ target_approach_direction: Optional[Sequence[float]] = None,
292
+ translation_axis: Optional[int] = None,
293
+ desired_offset: Optional[float] = None,
294
+ step_strength_orient: Optional[float] = None,
295
+ step_strength_trans: Optional[float] = None,
296
+ enable_roll_projection: Optional[bool] = None,
297
+ jaw_axis_index: Optional[int] = None,
298
+ target_roll_direction: Optional[Sequence[float] | str] = None,
299
+ roll_target_axis: Optional[int] = None,
300
+ roll_target_axis_sign: Optional[int] = None,
301
+ roll_use_local_tangent: Optional[bool] = None,
302
+ step_strength_roll: Optional[float] = None,
303
+ use_surface_normal_for_approach: Optional[bool] = None,
304
+ contact_proxy_offset: Optional[float] = None,
305
+ pca_axis_sign_ref: Optional[Sequence[float] | str] = None,
306
+ projection_debug: Optional[bool] = None,
307
+ ):
308
+ if self.intrinsics is None and None in (fx, fy, cx, cy):
309
+ raise ValueError("Camera intrinsics not set. Call init() or pass fx,fy,cx,cy.")
310
+
311
+         # fall back to init() intrinsics independently for each missing value
+         if self.intrinsics is not None:
312
+             fx = self.intrinsics["fx"] if fx is None else fx
313
+             fy = self.intrinsics["fy"] if fy is None else fy
314
+             cx = self.intrinsics["cx"] if cx is None else cx
315
+             cy = self.intrinsics["cy"] if cy is None else cy
316
+
317
+ object_pc, scene_pc = self.depth_mask_to_point_clouds(
318
+ depth,
319
+ mask,
320
+ fx,
321
+ fy,
322
+ cx,
323
+ cy,
324
+ target_object_id=target_object_id,
325
+ depth_scale=depth_scale,
326
+ remove_object_from_scene=True,
327
+ )
328
+ object_pc = self._downsample_points(object_pc, max_object_points)
329
+
330
+ if collision_check:
331
+ scene_pc = self._downsample_points(scene_pc, max_scene_points)
332
+ else:
333
+ scene_pc = None
334
+
335
+ return self.inference_from_point_cloud(
336
+ object_pc=object_pc,
337
+ scene_pc=scene_pc,
338
+ collision_check=collision_check,
339
+ grasp_threshold=grasp_threshold,
340
+ num_grasps=num_grasps,
341
+ return_topk=return_topk,
342
+ topk_num_grasps=topk_num_grasps,
343
+ collision_threshold=collision_threshold,
344
+ min_grasps=min_grasps,
345
+ max_tries=max_tries,
346
+ enable_orientation_projection=enable_orientation_projection,
347
+ enable_translation_projection=enable_translation_projection,
348
+ approach_axis_index=approach_axis_index,
349
+ approach_axis_source=approach_axis_source,
350
+ normal_avg_k=normal_avg_k,
351
+ target_approach_direction=target_approach_direction,
352
+ translation_axis=translation_axis,
353
+ desired_offset=desired_offset,
354
+ step_strength_orient=step_strength_orient,
355
+ step_strength_trans=step_strength_trans,
356
+ enable_roll_projection=enable_roll_projection,
357
+ jaw_axis_index=jaw_axis_index,
358
+ target_roll_direction=target_roll_direction,
359
+ roll_target_axis=roll_target_axis,
360
+ roll_target_axis_sign=roll_target_axis_sign,
361
+ roll_use_local_tangent=roll_use_local_tangent,
362
+ step_strength_roll=step_strength_roll,
363
+ use_surface_normal_for_approach=use_surface_normal_for_approach,
364
+ contact_proxy_offset=contact_proxy_offset,
365
+ pca_axis_sign_ref=pca_axis_sign_ref,
366
+ projection_debug=projection_debug,
367
+ )
368
+
369
+ def reset(self):
370
+ return self.client.send_data({"operation": "reset"})
371
+
372
+ def close(self):
373
+ self.client.close()
@@ -0,0 +1,103 @@
1
+ import numpy as np
2
+ import cv2
3
+
4
+
5
+ def _project_points(points, fx, fy, cx, cy, image_shape):
6
+ z = points[:, 2]
7
+ valid = z > 1e-6
8
+ u = fx * points[:, 0] / z + cx
9
+ v = fy * points[:, 1] / z + cy
10
+ h, w = image_shape
11
+ valid = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
12
+ return np.stack([u, v], axis=1), valid
13
+
14
+
15
+ def _get_color_from_score(scores):
16
+ scores = np.clip(scores, 0.0, 1.0)
17
+ colors = np.stack([scores, 1.0 - scores, np.zeros_like(scores)], axis=1) * 255.0
18
+ return colors.astype(np.uint8)
19
+
20
+
21
+ def draw_grasps_overlay(
22
+ rgb,
23
+ grasps,
24
+ scores,
25
+ fx,
26
+ fy,
27
+ cx,
28
+ cy,
29
+ max_grasps=50,
30
+ line_width=2,
31
+ labels=None,
32
+ label_color=(255, 255, 255),
33
+ label_scale=0.5,
34
+ label_thickness=1,
35
+ control_points=None,
36
+ ):
37
+ """
38
+ RGB 이미지에 grasp를 투영해 시각화합니다.
39
+
40
+ Args:
41
+ rgb: HxWx3 uint8 이미지
42
+ grasps: Kx4x4 grasp poses
43
+ scores: K 점수
44
+ control_points: (옵션) list of (M,3) arrays per grasp. 주어지면 해당 선분을 그림.
45
+ """
46
+ if rgb is None or grasps is None or len(grasps) == 0:
47
+ return None
48
+
49
+ frame = rgb.copy()
50
+ if frame.dtype != np.uint8:
51
+ frame = np.clip(frame, 0, 255).astype(np.uint8)
52
+
53
+ grasps = np.asarray(grasps)
54
+ scores = np.asarray(scores).reshape(-1)
55
+ image_shape = frame.shape[:2]
56
+ order = np.argsort(scores)[::-1]
57
+ if max_grasps > 0:
58
+ order = order[:max_grasps]
59
+ colors = _get_color_from_score(scores[order])
60
+
61
+ axis_length_m = 0.05
62
+ axis_colors = [(0, 0, 255), (0, 255, 0), (255, 0, 0)] # x,y,z
63
+
64
+ for idx, grasp_idx in enumerate(order):
65
+ grasp = grasps[grasp_idx]
66
+ origin = grasp[:3, 3]
67
+ color = tuple(int(c) for c in colors[idx])
68
+ origin_uv, origin_valid = _project_points(origin[None, :], fx, fy, cx, cy, image_shape)
69
+ if not origin_valid[0]:
70
+ continue
71
+ ox, oy = int(origin_uv[0][0]), int(origin_uv[0][1])
72
+ cv2.circle(frame, (ox, oy), max(2, line_width + 1), color, line_width)
73
+
74
+ rot = grasp[:3, :3]
75
+ for axis_idx, axis_dir in enumerate([rot[:, 0], rot[:, 1], rot[:, 2]]):
76
+ end_pt = origin + axis_dir * axis_length_m
77
+ end_uv, end_valid = _project_points(end_pt[None, :], fx, fy, cx, cy, image_shape)
78
+ if end_valid[0]:
79
+ ex, ey = int(end_uv[0][0]), int(end_uv[0][1])
80
+ cv2.line(frame, (ox, oy), (ex, ey), axis_colors[axis_idx], line_width)
81
+
82
+ if control_points is not None and grasp_idx < len(control_points):
83
+ pts = np.asarray(control_points[grasp_idx], dtype=np.float32)
84
+ pts_cam = (grasp[:3, :3] @ pts.T).T + grasp[:3, 3]
85
+ uv, valid = _project_points(pts_cam, fx, fy, cx, cy, image_shape)
86
+ for i in range(len(uv) - 1):
87
+ if valid[i] and valid[i + 1]:
88
+ p0 = (int(uv[i][0]), int(uv[i][1]))
89
+ p1 = (int(uv[i + 1][0]), int(uv[i + 1][1]))
90
+ cv2.line(frame, p0, p1, color, line_width)
91
+
92
+ if labels is not None and grasp_idx < len(labels):
93
+ cv2.putText(
94
+ frame,
95
+ str(labels[grasp_idx]),
96
+ (ox + 4, oy - 6),
97
+ cv2.FONT_HERSHEY_SIMPLEX,
98
+ label_scale,
99
+ label_color,
100
+ label_thickness,
101
+ cv2.LINE_AA,
102
+ )
103
+ return frame
@@ -0,0 +1,206 @@
1
+ Metadata-Version: 2.4
2
+ Name: neuromeka_grasp
3
+ Version: 0.1.0
4
+ Summary: Python client utilities for Neuromeka GraspGen Docker server
5
+ Author-email: Neuromeka <support@neuromeka.com>
6
+ License: Proprietary
7
+ Project-URL: Homepage, https://github.com/neuromeka
8
+ Requires-Python: >=3.9
9
+ Description-Content-Type: text/markdown
10
+ Requires-Dist: numpy>=1.23
11
+ Requires-Dist: pyzmq>=25.0.0
12
+ Requires-Dist: opencv-python-headless>=4.5.0
13
+
14
+ # neuromeka_grasp
15
+
16
+ Python client utilities for the GraspGen Docker server (ZeroMQ + pickle RPC).
17
+ Import and use it directly with `from neuromeka_grasp import GraspGeneration`.
18
+
19
+ ## Installation
20
+ ```bash
21
+ pip install neuromeka_grasp
22
+ ```
23
+
24
+ ## Requirements
25
+ - Python >= 3.9
26
+ - A running GraspGen Docker server from `neuromeka-repo/nrmk_graspgen`
27
+ - The server port is fixed at `5558`; the client defaults to it, so you usually do not need to change it
28
+ - Dependencies: numpy, pyzmq, opencv-python-headless
29
+
30
+ ## Supported grippers (model presets)
31
+ These names map to server-side checkpoints (see `nrmk_graspgen/modules/graspgen/app.py`):
32
+ - `dh_ag_160_95` (default)
33
+ - `robotiq_2f_140`
34
+ - `franka_panda`
35
+ - `single_suction_cup_30mm`
36
+
37
+ You can also pass a `.yml` config path via `model` if the server has other checkpoints.
38
+
39
+ ## Quick start (depth + mask)
40
+ ```python
41
+ from neuromeka_grasp import GraspGeneration
42
+
43
+ client = GraspGeneration(hostname="localhost")
44
+ client.init(fx, fy, cx, cy, model="dh_ag_160_95")
45
+
46
+ resp = client.inference_from_depth_mask(
47
+ depth=depth_np,
48
+ mask=mask_np,
49
+ fx=fx, fy=fy, cx=cx, cy=cy,
50
+ enable_orientation_projection=True,
51
+ approach_axis_source="local_normal_avg",
52
+ enable_roll_projection=True,
53
+ target_roll_direction="auto",
54
+ enable_translation_projection=True,
55
+ translation_axis=-1,
56
+ desired_offset=0.03,
57
+ )
58
+
59
+ if resp["result"] == "SUCCESS":
60
+ grasps = resp["data"]["grasps"]
61
+ scores = resp["data"]["scores"]
62
+ ```
63
+
64
+
65
+ ## Point cloud example
66
+ ```python
67
+ import numpy as np
68
+ from neuromeka_grasp import GraspGeneration
69
+
70
+ client = GraspGeneration(hostname="localhost")
71
+ client.init(fx, fy, cx, cy, model="dh_ag_160_95")
72
+
73
+ object_pc = np.asarray(object_points, dtype=np.float32) # (N, 3)
74
+ scene_pc = np.asarray(scene_points, dtype=np.float32) # (M, 3)
75
+
76
+ resp = client.inference_from_point_cloud(
77
+ object_pc=object_pc,
78
+ scene_pc=scene_pc,
79
+ collision_check=True,
80
+ grasp_threshold=0.8,
81
+ num_grasps=200,
82
+ )
83
+ ```
84
+
85
+ ## Visualization (optional)
86
+ ```python
87
+ from neuromeka_grasp import draw_grasps_overlay
88
+
89
+ overlay = draw_grasps_overlay(
90
+ rgb=rgb_image,
91
+ grasps=grasps, # (K, 4, 4)
92
+ scores=scores, # (K,)
93
+ fx=fx, fy=fy, cx=cx, cy=cy,
94
+ )
95
+ ```
96
+
97
+ ## API overview
98
+ - `GraspGeneration`
99
+ - `init(fx, fy, cx, cy, model="dh_ag_160_95")`
100
+ - `inference_from_point_cloud(...)`
101
+ - `inference_from_depth_mask(...)`
102
+ - `point_cloud_outlier_removal(obj_pc, threshold=0.014, k=20)`
103
+ - `reset()`, `close()`
104
+ - `draw_grasps_overlay(...)`: Project predicted grasps onto an RGB image.
105
+
106
+ ## init() reference
107
+ **Signature**
108
+ `init(fx, fy, cx, cy, model="dh_ag_160_95")`
109
+
110
+ **Parameters**
111
+ - `fx`, `fy`: Focal lengths in pixels.
112
+ - `cx`, `cy`: Principal point in pixels.
113
+ - `model`: Gripper preset name (see Supported grippers) or a `.yml` config path.
114
+ Config paths are resolved on the server side, so use a path visible to the server container.
115
+
116
+ **Behavior**
117
+ - Stores intrinsics in the client for future calls.
118
+ - Initializes the server model; response includes `gripper_name`, `model_path`, and `intrinsics`.
119
+
120
+ ## inference_from_depth_mask() reference
121
+ **Signature**
122
+ `inference_from_depth_mask(depth, mask, fx=None, fy=None, cx=None, cy=None, ...)`
123
+
124
+ **Core inputs**
125
+ - `depth`: (H, W) depth image. Units are scaled by `depth_scale`.
126
+ - `mask`: (H, W) or (H, W, 1) segmentation mask. If `target_object_id` is set, only that label is used.
127
+ - `fx`, `fy`, `cx`, `cy`: Camera intrinsics. If omitted, values set by `init()` are used.
128
+
129
+ **Object/scene point clouds**
130
+ - `target_object_id`: If set, selects a single object label from `mask`.
131
+ - `depth_scale`: Multiplier applied to `depth` before projection.
132
+ - `max_object_points`: Random downsample limit for object points (None disables).
133
+ - `max_scene_points`: Random downsample limit for scene points (used only when `collision_check=True`).
134
+
135
+ **Sampling and filtering**
136
+ - `collision_check`: If True, runs collision filtering with the scene point cloud.
137
+ - `collision_threshold`: Distance (meters) for collision filtering.
138
+ - `grasp_threshold`: Minimum score threshold for grasps.
139
+ - `num_grasps`: Number of grasps to sample before filtering.
140
+ - `return_topk`: If True, also returns top-k grasps (server-side).
141
+ - `topk_num_grasps`: Number of top grasps to keep when `return_topk=True`.
142
+ - `min_grasps`: Minimum number of grasps to return (server-side).
143
+ - `max_tries`: Max sampling attempts (server-side).
144
+
145
+ **Orientation projection**
146
+ - `enable_orientation_projection`: Enable orientation constraint projection.
147
+ - `approach_axis_index`: Gripper-frame approach axis index (0=x, 1=y, 2=z).
148
+ - `approach_axis_source`: `"local_normal"`, `"local_normal_avg"`, or `"global_pca"`.
149
+ - `normal_avg_k`: KNN count for averaging normals in `"local_normal_avg"`.
150
+ - `target_approach_direction`: Target approach direction (3-vector, camera frame).
151
+ - `step_strength_orient`: Orientation correction strength in [0, 1].
152
+ - `use_surface_normal_for_approach`: Legacy switch; affects default behavior when `approach_axis_source` is not set.
153
+ - `contact_proxy_offset`: Offset (meters) along approach axis for contact proxy.
154
+
155
+ **Translation projection**
156
+ - `enable_translation_projection`: Enable translation constraint projection.
157
+ - `translation_axis`: PCA axis index (`-1` = principal axis).
158
+ - `desired_offset`: Target offset (meters) along PCA axis.
159
+ - `step_strength_trans`: Translation correction strength in [0, 1].
160
+
161
+ **Roll projection**
162
+ - `enable_roll_projection`: Enable roll projection around approach axis.
163
+ - `jaw_axis_index`: Gripper-frame jaw axis index (0=x, 1=y, 2=z).
164
+ - `target_roll_direction`: Target jaw direction (3-vector, camera frame) or `"auto"`.
165
+ - `roll_target_axis`: PCA axis index when `target_roll_direction="auto"`.
166
+ - `roll_target_axis_sign`: Optional sign override (+1 or -1) for the PCA axis.
167
+ - `roll_use_local_tangent`: Use local tangent (2D PCA) for `"auto"` roll targets (server support required).
168
+ - `step_strength_roll`: Roll correction strength in [0, 1].
169
+
170
+ **PCA sign and debug**
171
+ - `pca_axis_sign_ref`: PCA sign reference (`"camera_x"`, `"roll_ref"`, or a 3-vector).
172
+ - `projection_debug`: Enable projection debug logging on the server.
173
+
174
+ ## Axis convention notes
175
+ - The rotation matrix columns represent the gripper-frame axes expressed in the camera frame.
176
+ - `approach_axis_index` and `jaw_axis_index` refer to those gripper-frame axes.
177
+ - When using `"global_pca"`, some server builds allow `approach_axis_index` or `roll_target_axis`
178
+ values `-10/-11/-12` to select the opposite direction of PCA axis 0/1/2.
179
+
180
+ ## PCA sign note
181
+ PCA eigenvectors can flip sign across runs. Use `pca_axis_sign_ref` to keep sign consistent
182
+ and avoid ambiguous translation/roll directions.
183
+
184
+ ## Inputs and shapes
185
+ - `depth`: HxW float or uint depth image.
186
+ - `mask`: HxW or HxWx1 integer mask. If `target_object_id` is set, only that label is used.
187
+ - `depth_scale`: Multiplier applied to `depth` before projection.
188
+ - `object_pc`, `scene_pc`: (N, 3) and (M, 3) float32 point clouds in camera frame.
189
+ - `grasps`: (K, 4, 4) homogeneous poses, `scores`: (K,) in [0, 1].
190
+
191
+ ## Projection parameters (summary)
192
+ - Orientation: `enable_orientation_projection`, `approach_axis_index`,
193
+ `approach_axis_source` (`local_normal`, `local_normal_avg`, `global_pca`),
194
+ `normal_avg_k`, `contact_proxy_offset`
195
+ - Translation: `enable_translation_projection`, `translation_axis`, `desired_offset`
196
+ - Roll: `enable_roll_projection`, `target_roll_direction` (supports `"auto"`),
197
+ `roll_target_axis`, `roll_target_axis_sign`, `roll_use_local_tangent`
198
+ - Sign control and debug: `pca_axis_sign_ref`, `projection_debug`
199
+
200
+ For detailed parameter semantics, refer to the server README.
201
+
202
+ ## Notes
203
+ - `inference_from_depth_mask` requires intrinsics set via `init()` or passed directly.
204
+ - Server port is fixed at `5558`; the client default matches this and typically should not be changed.
205
+ - `PickleClient` is synchronous, blocking, and sets no socket timeouts; configure `zmq.RCVTIMEO`/`zmq.SNDTIMEO` on the client socket or add retry logic if the server may be unreachable.
206
+ - Pickle is not secure against untrusted sources. Use only in trusted environments.
@@ -0,0 +1,10 @@
1
+ README.md
2
+ pyproject.toml
3
+ neuromeka_grasp/__init__.py
4
+ neuromeka_grasp/client.py
5
+ neuromeka_grasp/overlay.py
6
+ neuromeka_grasp.egg-info/PKG-INFO
7
+ neuromeka_grasp.egg-info/SOURCES.txt
8
+ neuromeka_grasp.egg-info/dependency_links.txt
9
+ neuromeka_grasp.egg-info/requires.txt
10
+ neuromeka_grasp.egg-info/top_level.txt
@@ -0,0 +1,3 @@
1
+ numpy>=1.23
2
+ pyzmq>=25.0.0
3
+ opencv-python-headless>=4.5.0
@@ -0,0 +1 @@
1
+ neuromeka_grasp
@@ -0,0 +1,20 @@
1
+ [build-system]
2
+ requires = ["setuptools>=61.0", "wheel"]
3
+ build-backend = "setuptools.build_meta"
4
+
5
+ [project]
6
+ name = "neuromeka_grasp"
7
+ version = "0.1.0"
8
+ description = "Python client utilities for Neuromeka GraspGen Docker server"
9
+ authors = [{ name = "Neuromeka", email = "support@neuromeka.com" }]
10
+ readme = "README.md"
11
+ requires-python = ">=3.9"
12
+ dependencies = [
13
+ "numpy>=1.23",
14
+ "pyzmq>=25.0.0",
15
+ "opencv-python-headless>=4.5.0",
16
+ ]
17
+ license = { text = "Proprietary" }
18
+
19
+ [project.urls]
20
+ Homepage = "https://github.com/neuromeka"
@@ -0,0 +1,4 @@
1
+ [egg_info]
2
+ tag_build =
3
+ tag_date = 0
4
+