superlab 0.1.14 → 0.1.16

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (74)
  1. package/README.md +22 -13
  2. package/README.zh-CN.md +22 -13
  3. package/bin/superlab.cjs +38 -0
  4. package/lib/auto_contracts.cjs +7 -3
  5. package/lib/auto_runner.cjs +33 -52
  6. package/lib/auto_state.cjs +27 -21
  7. package/lib/context.cjs +15 -0
  8. package/lib/i18n.cjs +150 -78
  9. package/lib/install.cjs +14 -10
  10. package/package-assets/claude/commands/lab-auto.md +13 -0
  11. package/package-assets/claude/commands/lab-data.md +10 -0
  12. package/package-assets/claude/commands/lab-framing.md +10 -0
  13. package/package-assets/claude/commands/lab-idea.md +10 -0
  14. package/package-assets/claude/commands/lab-iterate.md +10 -0
  15. package/package-assets/claude/commands/lab-report.md +10 -0
  16. package/package-assets/claude/commands/lab-review.md +10 -0
  17. package/package-assets/claude/commands/lab-run.md +10 -0
  18. package/package-assets/claude/commands/lab-spec.md +10 -0
  19. package/package-assets/claude/commands/lab-write.md +10 -0
  20. package/package-assets/claude/commands/lab.md +32 -27
  21. package/package-assets/codex/prompts/lab-write.md +1 -1
  22. package/package-assets/codex/prompts/lab.md +1 -0
  23. package/package-assets/shared/lab/.managed/templates/final-report.md +12 -0
  24. package/package-assets/shared/lab/.managed/templates/main-tables.md +37 -0
  25. package/package-assets/shared/lab/config/workflow.json +3 -1
  26. package/package-assets/shared/lab/context/auto-outcome.md +3 -0
  27. package/package-assets/shared/skills/lab/SKILL.md +6 -2
  28. package/package-assets/shared/skills/lab/references/paper-writing/abstract.md +7 -1
  29. package/package-assets/shared/skills/lab/references/paper-writing/examples/abstract/template-a.md +21 -0
  30. package/package-assets/shared/skills/lab/references/paper-writing/examples/abstract/template-b.md +34 -0
  31. package/package-assets/shared/skills/lab/references/paper-writing/examples/abstract/template-c.md +28 -0
  32. package/package-assets/shared/skills/lab/references/paper-writing/examples/abstract-examples.md +13 -0
  33. package/package-assets/shared/skills/lab/references/paper-writing/examples/index.md +21 -0
  34. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/novel-task-challenge-decomposition.md +18 -0
  35. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/pipeline-not-recommended-abstract-only.md +30 -0
  36. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/pipeline-version-1-one-contribution-multi-advantages.md +30 -0
  37. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/pipeline-version-2-two-contributions.md +34 -0
  38. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/pipeline-version-3-new-module-on-existing-pipeline.md +18 -0
  39. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/pipeline-version-4-observation-driven.md +16 -0
  40. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/technical-challenge-version-1-existing-task.md +32 -0
  41. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/technical-challenge-version-2-existing-task-insight-backed-by-traditional.md +33 -0
  42. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/technical-challenge-version-3-novel-task.md +21 -0
  43. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/version-1-task-then-application.md +14 -0
  44. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/version-2-application-first.md +10 -0
  45. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/version-3-general-to-specific-setting.md +14 -0
  46. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/version-4-open-with-challenge.md +20 -0
  47. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction-examples.md +25 -0
  48. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/example-of-the-three-elements.md +67 -0
  49. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/method-writing-common-issues-note.md +10 -0
  50. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/module-design-instant-ngp.md +55 -0
  51. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/module-motivation-patterns.md +15 -0
  52. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/module-triad-neural-body.md +19 -0
  53. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/neural-body-annotated-figure-text.md +66 -0
  54. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/overview-template.md +30 -0
  55. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/pre-writing-questions.md +17 -0
  56. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/section-skeleton.md +9 -0
  57. package/package-assets/shared/skills/lab/references/paper-writing/examples/method-examples.md +24 -0
  58. package/package-assets/shared/skills/lab/references/paper-writing/introduction.md +7 -1
  59. package/package-assets/shared/skills/lab/references/paper-writing/method.md +6 -2
  60. package/package-assets/shared/skills/lab/references/paper-writing-integration.md +26 -0
  61. package/package-assets/shared/skills/lab/stages/auto.md +9 -1
  62. package/package-assets/shared/skills/lab/stages/report.md +5 -1
  63. package/package-assets/shared/skills/lab/stages/write.md +16 -1
  64. package/package.json +1 -1
  65. package/package-assets/claude/commands/lab/auto.md +0 -14
  66. package/package-assets/claude/commands/lab/data.md +0 -11
  67. package/package-assets/claude/commands/lab/framing.md +0 -11
  68. package/package-assets/claude/commands/lab/idea.md +0 -11
  69. package/package-assets/claude/commands/lab/iterate.md +0 -11
  70. package/package-assets/claude/commands/lab/report.md +0 -11
  71. package/package-assets/claude/commands/lab/review.md +0 -11
  72. package/package-assets/claude/commands/lab/run.md +0 -11
  73. package/package-assets/claude/commands/lab/spec.md +0 -11
  74. package/package-assets/claude/commands/lab/write.md +0 -11
package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/technical-challenge-version-3-novel-task.md
@@ -0,0 +1,21 @@
# Technical Challenge Version 3 (Novel Task)


`Version 3: For novel tasks without direct methods, define the challenge directly and decompose it by requirement/challenge points.`

```latex
% To achieve xx goal, several requirements/challenges must be satisfied.
%% Example: In this work, our goal is to build a model that captures such object intrinsics from a single image. This problem is challenging for three reasons.

% Describe point 1
%% Example: First, we only have a single image. This makes our work fundamentally different from existing works on 3D-aware image generation models [8, 9, 27, 28], which typically require a large dataset of thousands of instances for training. In comparison, the single image contains at most a few dozen instances, making the inference problem highly under-constrained.

% Describe point 2
%% Example: Second, these already limited instances may vary significantly in pixel values. This is because they have different poses and illumination conditions, but neither of these factors are annotated or known. We also cannot resort to existing tools for pose estimation based on structure from motion, such as COLMAP [35], because the appearance variations violate the assumptions of epipolar geometry.

% Describe point 3
%% Example: Finally, the object intrinsics we aim to infer are probabilistic, not deterministic: no two roses in the natural world are identical, and we want to capture a distribution of their geometry, texture, and material to exploit the underlying multi-view information.
```

See also:
1. `references/examples/introduction/novel-task-challenge-decomposition.md`
package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/version-1-task-then-application.md
@@ -0,0 +1,14 @@
# Introduction Version 1: Task First, Then Application


`Version 1: If the task is relatively niche, introduce the task first, then introduce applications.`

```latex
% Introduce Task (if the task is very familiar, this part can be skipped)
%% Example: Object pose estimation aims to estimate an object's orientation and translation relative to a canonical frame from a single image.
[xxx task] aims at recovering/reconstructing/estimating [xxx output] from [xxx input].

% Introduce Application
%% Example: Accurate pose estimation is essential for a variety of applications such as augmented reality, autonomous driving and robotic manipulation.
[xxx task] has a variety of applications such as [xxx], [xxx], and [xxx].
```
package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/version-2-application-first.md
@@ -0,0 +1,10 @@
# Introduction Version 2: Application First


`Version 2: If the task is already familiar to most readers, introduce applications directly.`

```latex
% Introduce Application
%% Example: Accurate pose estimation is essential for a variety of applications such as augmented reality, autonomous driving and robotic manipulation.
[xxx task] has a variety of applications such as [xxx], [xxx], and [xxx].
```
package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/version-3-general-to-specific-setting.md
@@ -0,0 +1,14 @@
# Introduction Version 3: General Application -> Specific Setting


`Version 3: Introduce applications of the general task first, then introduce the specific task setting. (Personally recommended when the setting is relatively new.)`

```latex
% Introduce applications of the general task
%% Example: Accurate pose estimation is essential for a variety of applications such as augmented reality, autonomous driving and robotic manipulation.
[xxx task] has a variety of applications such as [xxx], [xxx], and [xxx].

% Introduce the specific task setting
%% Example: This paper focuses on the specific setting of recovering the 6DoF pose of an object, i.e., rotation and translation in 3D, from a single RGB image of that object.
This paper focuses on the specific setting of recovering/reconstructing/estimating [xxx output] from [xxx input].
```
package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/version-4-open-with-challenge.md
@@ -0,0 +1,20 @@
# Introduction Version 4: Open with Application and Challenge


`Version 4: If the task is familiar, introduce applications directly and expose the target technical challenge in the opening paragraph via previous methods.`

Expert notes (faithful translation):

1. It is often good if the opening paragraph already states what we want to solve.
2. But this style requires suitable conditions and is less common.
3. Usually, several prior-method paragraphs are still needed before the target challenge becomes clear.

```latex
% Introduce Application
%% Example 1: Reconstructing 3D scenes from multi-view images is a cornerstone of many applications such as augmented reality, robotics, and autonomous driving.
%% Example 2: Instance segmentation is the cornerstone of many computer vision tasks, such as video analysis, autonomous driving, and robotic grasping, which require both accuracy and efficiency.

% Use previous methods to expose the target technical challenge
%% Example 1: Given input images, traditional methods [43, 44, 59] generally estimate the depth map for each image based on the multi-view stereo (MVS) algorithms and then fuse estimated depth maps into 3D models. Although these methods achieve successful reconstruction in most cases, they have difficulty in handling low-textured regions, e.g., floors and walls of indoor scenes, due to the unreliable stereo matching in these regions.
%% Example 2: Most of the state-of-the-art instance segmentation methods [18, 27, 5, 19] perform pixel-wise segmentation within a bounding box given by an object detector [36], which may be sensitive to the inaccurate bounding box. Moreover, representing an object shape as dense binary pixels generally results in costly post-processing.
```
package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction-examples.md
@@ -0,0 +1,25 @@
# Introduction Examples Index

All introduction example citations should point to the local files below.

## A. Task and Application Versions

1. Version 1: `references/examples/introduction/version-1-task-then-application.md`
2. Version 2: `references/examples/introduction/version-2-application-first.md`
3. Version 3: `references/examples/introduction/version-3-general-to-specific-setting.md`
4. Version 4: `references/examples/introduction/version-4-open-with-challenge.md`

## B. Technical Challenge Versions

1. Version 1 (existing task): `references/examples/introduction/technical-challenge-version-1-existing-task.md`
2. Version 2 (existing task + traditional insight backing): `references/examples/introduction/technical-challenge-version-2-existing-task-insight-backed-by-traditional.md`
3. Version 3 (novel task): `references/examples/introduction/technical-challenge-version-3-novel-task.md`
4. Novel-task decomposition examples: `references/examples/introduction/novel-task-challenge-decomposition.md`

## C. Pipeline-Introduction Versions

1. Version 1: `references/examples/introduction/pipeline-version-1-one-contribution-multi-advantages.md`
2. Version 2: `references/examples/introduction/pipeline-version-2-two-contributions.md`
3. Version 3: `references/examples/introduction/pipeline-version-3-new-module-on-existing-pipeline.md`
4. Version 4: `references/examples/introduction/pipeline-version-4-observation-driven.md`
5. Not recommended pattern: `references/examples/introduction/pipeline-not-recommended-abstract-only.md`
package/package-assets/shared/skills/lab/references/paper-writing/examples/method/example-of-the-three-elements.md
@@ -0,0 +1,67 @@
# Example of the Three Elements

This example uses `%` comments as annotations.
Each `% ...` annotation explains the paragraph(s) immediately below it.

```latex
\begin{quote}
\textbf{Annotation rule.} In this example, each line starting with \% labels the role of the paragraph(s) directly below it.
\end{quote}

\begin{itemize}
\item Module design (data structure)
\item Motivation of this module
\item Technical advantages of this module
\item Module design (forward process)
\end{itemize}

\subsection{3.1. Structured latent codes}

% Module design: introduce the module's data structure
To control the spatial locations of latent codes with the human pose, we anchor these latent codes to a deformable human body model (SMPL) [38]. SMPL is a skinned vertex-based model, which is defined as a function of shape parameters, pose parameters, and a rigid transformation relative to the SMPL coordinate system. The function outputs a posed 3D mesh with 6890 vertices. Specifically, we define a set of latent codes \( Z = \{z_1, z_2, ..., z_{6890}\} \) on vertices of the SMPL model. For the frame \( t \), SMPL parameters \( S_t \) are estimated from the multi-view images \( \{I_t^c \mid c = 1, ..., N_c\} \) using [26]. The spatial locations of the latent codes are then transformed based on the human pose \( S_t \) for the density and color regression. Figure 3 shows an example. The dimension of latent code \( z \) is set to 16 in our experiments.

% Technical advantages of this module
Similar to the local implicit representations [25, 5, 18], the latent codes are used with a neural network to represent the local geometry and appearance of a human. Anchoring these codes to a deformable model enables us to represent a dynamic human. With the dynamic human representation, we establish a latent variable model that maps the same set of latent codes to the implicit fields of density and color at different frames, which naturally integrates observations at different frames.

\subsection{3.2. Code diffusion}

% Motivation of this module
Figure 3(a) shows the process of code diffusion. The implicit fields assign the density and color to each point in the 3D space, which requires us to query the latent codes at continuous 3D locations. This can be achieved with the trilinear interpolation. However, since the structured latent codes are relatively sparse in the 3D space, directly interpolating the latent codes leads to zero vectors at most 3D points. To solve this problem, we diffuse the latent codes defined on the surface to nearby 3D space.

% Module design: introduce module design by describing the module forward process
Inspired by [65, 56, 49], we choose the SparseConvNet [21] to efficiently process the structured latent codes, whose architecture is described in Table 1. Specifically, based on the SMPL parameters, we compute the 3D bounding box of the human and divide the box into small voxels with voxel size of \( 5mm \times 5mm \times 5mm \). The latent code of a non-empty voxel is the mean of latent codes of SMPL vertices inside this voxel. SparseConvNet utilizes 3D sparse convolutions to process the input volume and output latent code volumes with \( 2\times, 4\times, 8\times, 16\times \) downsampled sizes. With the convolution and downsampling, the input codes are diffused to nearby space. Following [56], for any point in 3D space, we interpolate the latent codes from multi-scale code volumes of network layers 5, 9, 13, 17, and concatenate them into the final latent code. Since the code diffusion should not be affected by the human position and orientation in the world coordinate system, we transform the code locations to the SMPL coordinate system.

For any point \( \mathbf{x} \) in 3D space, we query its latent code from the latent code volume. Specifically, the point \( \mathbf{x} \) is first transformed to the SMPL coordinate system, which aligns the point and the latent code volume in 3D space. Then, the latent code is computed using the trilinear interpolation. For the SMPL parameters \( S_t \), we denote the latent code at point \( \mathbf{x} \) as \( \psi(\mathbf{x}, Z, S_t) \). The code vector is passed into MLP networks to predict the density and color for point \( \mathbf{x} \).

\subsection{3.3. Density and color regression}

Figure 3(b) overviews the regression of density and color for any point in 3D space. The density and color fields are represented by MLP networks. Details of network architectures are described in the supplementary material.

% Module design: introduce module design by describing the module forward process
\textbf{Density model.} For the frame \( t \), the volume density at point \( \mathbf{x} \) is predicted as a function of only the latent code \( \psi(\mathbf{x}, Z, S_t) \), which is defined as:

\[
\sigma_t(\mathbf{x}) = M_{\sigma}(\psi(\mathbf{x}, Z, S_t)),
\tag{1}
\]

where \( M_{\sigma} \) represents an MLP network with four layers.

% Module design: introduce the module's data structure
\textbf{Color model.} Similar to [37, 44], we take both the latent code \( \psi(\mathbf{x}, Z, S_t) \) and the viewing direction \( \mathbf{d} \) as input for the color regression. To model the location-dependent incident light, the color model also takes the spatial location \( \mathbf{x} \) as input. We observe that temporally-varying factors affect the human appearance, such as secondary lighting and self-shadowing. Inspired by the auto-decoder [48], we assign a latent embedding \( \ell_t \) for each video frame \( t \) to encode the temporally-varying factors.

% Module design: introduce module design by describing the module forward process
Specifically, for the frame \( t \), the color at \( \mathbf{x} \) is predicted as a function of the latent code \( \psi(\mathbf{x}, Z, S_t) \), the viewing direction \( \mathbf{d} \), the spatial location \( \mathbf{x} \), and the latent embedding \( \ell_t \). Following [51, 44], we apply the positional encoding to both the viewing direction \( \mathbf{d} \) and the spatial location \( \mathbf{x} \), which enables better learning of high frequency functions. The color model at frame \( t \) is defined as:

\[
c_t(\mathbf{x}) = M_c(\psi(\mathbf{x}, Z, S_t), \gamma_d(\mathbf{d}), \gamma_x(\mathbf{x}), \ell_t),
\tag{2}
\]

where \( M_c \) represents an MLP network with two layers, and \( \gamma_d \) and \( \gamma_x \) are positional encoding functions for viewing direction and spatial location, respectively. We set the dimension of \( \ell_t \) to 128 in experiments.

\subsection{3.4. Volume rendering}

% Module design: introduce module design by describing the module forward process
Given a viewpoint, we utilize the classical volume rendering techniques to render the Neural Body into a 2D image. The pixel colors are estimated via the volume rendering integral equation [27] that accumulates volume densities and colors along the corresponding camera ray. In practice, the integral is approximated using numerical quadrature [41, 44]. Given a pixel, we first compute its camera ray \( \mathbf{r} \) using the camera parameters and sample \( N_k \) points \( \{\mathbf{x}_k\}_{k=1}^{N_k} \) along camera ray \( \mathbf{r} \) between near and far bounds. The scene bounds are estimated based on the SMPL model. Then, Neural Body predicts volume densities and colors at these points. For the video frame \( t \), the rendered color \( \hat{C}_t(\mathbf{r}) \) ...
```
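The positional encoding \( \gamma \) applied to the viewing direction and spatial location in the color model above is the standard sinusoidal encoding. A minimal illustrative sketch (the function name and the frequency count `num_freqs` are assumptions for illustration, not values from the paper):

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """Sketch of gamma(.): map each coordinate of x to
    [sin(2^k * pi * x), cos(2^k * pi * x)] pairs for k = 0..num_freqs-1."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi   # (num_freqs,)
    scaled = x[..., None] * freqs                   # (..., d, num_freqs)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)           # (..., d * 2 * num_freqs)

d = np.array([0.0, 0.0, 1.0])      # a viewing direction
gamma_d = positional_encoding(d)   # 3 coords * 2 * 6 freqs = 36 values
```

The encoded vectors \( \gamma_d(\mathbf{d}) \) and \( \gamma_x(\mathbf{x}) \) would then be concatenated with the latent code and \( \ell_t \) as inputs to the color MLP.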
package/package-assets/shared/skills/lab/references/paper-writing/examples/method/method-writing-common-issues-note.md
@@ -0,0 +1,10 @@
# Method Writing Common Issues (Reference Note)

Original source mentioned in your notes:

1. `Method writing common issues (PDF in your source notes)`

Usage recommendation:

1. Use this reference as a troubleshooting checklist after drafting Method.
2. Prioritize unclear motivation, broken flow, missing implementation details, and inconsistent terms.
package/package-assets/shared/skills/lab/references/paper-writing/examples/method/module-design-instant-ngp.md
@@ -0,0 +1,55 @@
# Module Design Example

This example uses `%` comments as annotations.
Each `% ...` annotation explains the paragraph(s) immediately below it.

```latex
\begin{quote}
\textbf{Annotation rule.} In this example, each line starting with \% labels the role of the paragraph(s) directly below it.
\end{quote}

\begin{itemize}
\item Motivation of this module
\item Module design (data structure)
\item Module design (forward process)
\end{itemize}

\section{3 \quad MULTIRESOLUTION HASH ENCODING}

% Motivation of this module
Given a fully connected neural network \(m(y;\Phi)\), we are interested in an encoding of its inputs \(y=\operatorname{enc}(x;\theta)\) that improves the approximation quality and training speed across a wide range of applications without incurring a notable performance overhead.

% Module design: introduce the module's data structure
Our neural network not only has trainable weight parameters \(\Phi\), but also trainable encoding parameters \(\theta\). These are arranged into \(L\) levels, each containing up to \(T\) feature vectors with dimensionality \(F\). Typical values for these hyperparameters are shown in Table 1. Figure 3 illustrates the steps performed in our multiresolution hash encoding. Each level (two of which are shown as red and blue in the figure) is independent and conceptually stores feature vectors at the vertices of a grid, the resolution of which is chosen to be a geometric progression between the coarsest and finest resolutions \([N_{\min},N_{\max}]\):

\[
N_l := \left\lfloor N_{\min}\cdot b^l \right\rfloor, \tag{2}
\]

\[
b := \exp\!\left(\frac{\ln N_{\max}-\ln N_{\min}}{L-1}\right). \tag{3}
\]

\(N_{\max}\) is chosen to match the finest detail in the training data. Due to the large number of levels \(L\), the growth factor is usually small. Our use cases have \(b\in[1.26,2]\).

% Module design: introduce module design by describing the module forward process
Consider a single level \(l\). The input coordinate \(x\in\mathbb{R}^d\) is scaled by that level's grid resolution before rounding down and up:
\[
\lfloor x_l \rfloor := \lfloor x\cdot N_l \rfloor,\quad
\lceil x_l \rceil := \lceil x\cdot N_l \rceil.
\]

\(\lfloor x_l \rfloor\) and \(\lceil x_l \rceil\) span a voxel with \(2^d\) integer vertices in \(\mathbb{Z}^d\). We map each corner to an entry in the level's respective feature vector array, which has fixed size of at most \(T\). For coarser levels where a dense grid requires fewer than \(T\) parameters, i.e. \((N_l+1)^d \le T\), this mapping is 1:1. At finer levels, we use a hash function \(h:\mathbb{Z}^d\rightarrow\mathbb{Z}_T\) to index into the array, effectively treating it as a hash table, although there is no explicit collision handling. We rely instead on the gradient-based optimization to store appropriate sparse detail in the array, and the subsequent neural network \(m(y;\Phi)\) for collision resolution. The number of trainable encoding parameters \(\theta\) is therefore \(O(T)\) and bounded by \(T\cdot L\cdot F\), which in our case is always \(T\cdot16\cdot2\) (Table 1).

We use a spatial hash function [Teschner et al. 2003] of the form
\[
h(x)=\left(\bigoplus_{i=1}^{d} x_i\pi_i\right)\bmod T, \tag{4}
\]
where \(\oplus\) denotes the bit-wise XOR operation and \(\pi_i\) are unique, large prime numbers. Effectively, this formula XORs the results of a per-dimension linear congruential (pseudo-random) permutation [Lehmer 1951], \emph{decorrelating} the effect of the dimensions on the hashed value. Notably, to achieve (pseudo-)independence, only \(d-1\) of the \(d\) dimensions must be permuted, so we choose \(\pi_1:=1\) for better cache coherence, \(\pi_2=2{,}654{,}435{,}761\), and \(\pi_3=805{,}459{,}861\).

Lastly, the feature vectors at each corner are \(d\)-linearly interpolated according to the relative position of \(x\) within its hypercube, i.e. the interpolation weight is \(w_l := x_l-\lfloor x_l \rfloor\).

Recall that this process takes place independently for each of the \(L\) levels. The interpolated feature vectors of each level, as well as auxiliary inputs \(\xi\in\mathbb{R}^E\) (such as the encoded view direction and textures in neural radiance caching), are concatenated to produce \(y\in\mathbb{R}^{LF+E}\), which is the encoded input \(\operatorname{enc}(x;\theta)\) to the MLP \(m(y;\Phi)\).

\textbf{Performance vs. quality.} Choosing the hash table size \(T\) provides a trade-off between performance, memory and quality. Higher values of \(T\) result in higher quality and lower performance. The memory ...
```
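The per-level grid resolutions (Eqs. 2-3) and the spatial hash (Eq. 4) above are simple enough to sketch directly. A minimal Python sketch under stated assumptions (function names and the example `N_min`/`N_max`/`L`/`T` values are hypothetical; the primes are the ones given in the text):

```python
import math

# Per-dimension primes from Eq. (4); pi_1 := 1 for cache coherence.
PRIMES = (1, 2_654_435_761, 805_459_861)

def level_resolutions(n_min, n_max, num_levels):
    """Eqs. (2)-(3): geometric progression of grid resolutions N_l."""
    b = math.exp((math.log(n_max) - math.log(n_min)) / (num_levels - 1))
    return [math.floor(n_min * b ** l) for l in range(num_levels)]

def spatial_hash(vertex, table_size):
    """Eq. (4): XOR of per-dimension products with large primes, mod T.
    Collisions are left unresolved, as in the text."""
    h = 0
    for x_i, pi_i in zip(vertex, PRIMES):
        h ^= x_i * pi_i
    return h % table_size

res = level_resolutions(16, 512, 16)     # coarsest 16, finest ~512, L = 16
idx = spatial_hash((31, 129, 7), 2**19)  # index into a level's feature array
```

With these numbers the growth factor \(b=\exp(\ln 32/15)\approx 1.26\) sits at the low end of the \([1.26,2]\) range mentioned in the text.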
package/package-assets/shared/skills/lab/references/paper-writing/examples/method/module-motivation-patterns.md
@@ -0,0 +1,15 @@
# Module Motivation Writing Patterns


`Module motivation is usually problem-driven: because a problem exists, we design xx to solve it.`

Typical opening sentences:

1. `A remaining problem/challenge is ...`
2. `However, we ...`
3. `Previous methods have difficulty in ...`

Usage notes:

1. State the specific failure before introducing the module.
2. Keep motivation independent of implementation details.
package/package-assets/shared/skills/lab/references/paper-writing/examples/method/module-triad-neural-body.md
@@ -0,0 +1,19 @@
# Module Triad Example (Neural Body)

`Use Neural Body to understand the three elements of a module: design, motivation, and technical advantages.`

Local source references:

1. Annotated figure showing motivation/design/advantages split.
2. Text-converted annotation notes: `references/examples/method/neural-body-annotated-figure-text.md`

Triad mapping template:

1. Module design: what representation/network is built and how the forward process runs.
2. Motivation: what unresolved challenge requires this module.
3. Technical advantages: why this module performs better than alternatives.

Direct usage:

1. Read `neural-body-annotated-figure-text.md` to map each paragraph to one triad element.
2. Rebuild your own Method subsection with the same triad order.
package/package-assets/shared/skills/lab/references/paper-writing/examples/method/neural-body-annotated-figure-text.md
@@ -0,0 +1,66 @@
# Neural Body Annotated Figure (Text Conversion)

This file converts the annotated Neural Body figure into reusable writing notes.

## Purpose

Use this mapping to understand how one Method section can explicitly separate:

1. Module motivation
2. Module design (data structure)
3. Module design (forward process)
4. Technical advantages

## Block-by-Block Mapping

### Section 3.1: Structured Latent Codes

1. **Module design (data structure)**
   - The paragraph defines structured latent codes anchored to the deformable human model (SMPL).
   - It explains what is constructed (latent codes + their anchor positions + frame-dependent transformation by pose).

2. **Technical advantages**
   - The paragraph explains why this design works better: dynamic-human representation and cross-frame integration of observations.
   - It highlights why anchoring codes to deformable geometry is beneficial.

### Section 3.2: Code Diffusion

1. **Motivation of this module**
   - The paragraph states the remaining problem: direct interpolation of sparse structured codes leads to near-zero vectors at many 3D points.
   - This motivates diffusion from surface codes to nearby 3D space.

2. **Module design (forward process)**
   - The paragraph explains the execution pipeline: build sparse latent volumes, run sparse convolutions, interpolate latent codes at query points, and feed codes to prediction networks.
   - This is a canonical input -> steps -> output module description.

### Section 3.3: Density and Color Regression

1. **Module design (forward process) for density model**
   - The density paragraph defines how density is regressed from latent code and frame condition.

2. **Module design (data structure) for color model**
   - The color paragraph introduces required inputs/embeddings (latent code, view direction, spatial location, temporal embedding).

3. **Module design (forward process) for color model**
   - The next paragraph describes how those inputs are encoded and passed into the color MLP for final color prediction.

### Section 3.4: Volume Rendering

1. **Module design (forward process)**
   - The paragraph describes ray sampling and volume integration to render image outputs from predicted density/color fields.

## Reusable Writing Pattern from This Figure

For each module subsection, follow this order:

1. `Motivation`: state unresolved challenge and technical reason.
2. `Design-1`: define structure/representation/network.
3. `Design-2`: describe forward process in execution order.
4. `Advantage`: explain why this module improves over alternatives.

## Suggested Paragraph Starters

1. Motivation: `A remaining challenge is ...`
2. Data structure design: `We represent ... with ...`
3. Forward process: `Given [input], we first ... then ... finally ...`
4. Technical advantage: `Compared with previous methods, this design ... because ...`
@@ -0,0 +1,30 @@
+ # Method Overview Template
+
+
+ `Overview usually includes setting, core contribution, optional figure pointer, and subsection map.`
+
+ ```latex
+ % Overview
+ % One or two sentences for setting
+ %% Example 1: Given a sparse multi-view video of a performer, our task is to generate a free-viewpoint video of the performer.
+ %% Example 2: Given an image, the task of pose estimation is to detect objects and estimate their orientations and translations in the 3D space.
+
+ % One or two sentences for core contribution
+ %% Example 1: We build upon prior work for static scenes [46], to which we add the notion of time, and estimate 3D motion by explicitly modeling forward and backward scene flow as dense 3D vector fields.
+ %% Example 2: Inspired by [21, 25], we perform object segmentation by deforming an initial contour to match object boundary.
+ %% Example 3: Inspired by recent methods [29, 30, 36], we estimate the object pose using a two-stage pipeline: we first detect 2D object keypoints using CNNs and then compute 6D pose parameters using the PnP algorithm. Our innovation is in a new representation for 2D object keypoints as well as a modified PnP algorithm for pose estimation.
+
+ % If pipeline/framework is novel, point to figure
+ %% Example: The overview of the proposed model is illustrated in Figure 3.
+
+ % Explain what Section 3.1 covers
+ %% Example 1: Neural Body starts from a set of structured latent codes attached to the surface of a deformable human model (Section 3.1).
+ %% Example 2: In this section, we first describe how to model 3D scenes with MLP maps (Section 3.1).
+
+ % Explain what Section 3.2 covers
+ %% Example 1: The latent code at any location around the surface can be obtained with a code diffusion process (Section 3.2) and then decoded to density and color values by neural networks (Section 3.3).
+ %% Example 2: Then, Section 3.2 discusses how to represent volumetric videos with dynamic MLP maps.
+
+ % Explain what Section 3.3 covers
+ %% Example: Finally, we introduce some strategies to speed up the rendering process (Section 3.3).
+ ```
@@ -0,0 +1,17 @@
+ # Method Pre-Writing Questions
+
+
+ `Before writing the Method section, answer: (1) what modules exist, and (2) for each module, what its workflow is, why it is needed, and why it works.`
+
+ ```text
+ Questions:
+ (1) What modules are in the method?
+ (2) For each module, answer three questions:
+ - What is this module's workflow?
+ - Why do we need this module?
+ - Why does this module work?
+ ```
+
+ Recommended action:
+
+ 1. Organize answers in a mind map or table before writing paragraphs.
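Where a table is chosen, the three per-module questions can be laid out like this. The module rows are illustrative placeholders drawn from the Neural-Body-style pipeline discussed elsewhere in these references, not a required set:

```text
Module          | Workflow (what it does)        | Need (why it exists)             | Mechanism (why it works)
----------------|--------------------------------|----------------------------------|----------------------------------
Latent encoding | points -> sparse latent volume | raw inputs lack shared structure | sparse convolutions share context
Color MLP       | code + view + time -> color    | rendering needs a color field    | view/time conditioning
```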
@@ -0,0 +1,9 @@
+ # Method Section Skeleton
+
+ ```latex
+ \section{Method}
+ % Overview
+ % Section 3.1
+ % Section 3.2
+ % Section 3.3
+ ```
@@ -0,0 +1,24 @@
+ # Method Examples Index
+
+ All method example citations should point to the local files below.
+
+ ## A. Planning and Writing Workflow
+
+ 1. Pre-writing questions: `references/examples/method/pre-writing-questions.md`
+
+ ## B. Module Triad and Module-Level Writing
+
+ 1. Module triad (Neural Body): `references/examples/method/module-triad-neural-body.md`
+ 2. Neural Body figure text conversion: `references/examples/method/neural-body-annotated-figure-text.md`
+ 3. Module design (Instant-NGP): `references/examples/method/module-design-instant-ngp.md`
+ 4. Module motivation patterns: `references/examples/method/module-motivation-patterns.md`
+
+ ## C. Section-Level Templates
+
+ 1. Method section skeleton: `references/examples/method/section-skeleton.md`
+ 2. Overview template: `references/examples/method/overview-template.md`
+ 3. Example of the three elements: `references/examples/method/example-of-the-three-elements.md`
+
+ ## D. Clarity and Troubleshooting
+
+ 1. Common issues note: `references/examples/method/method-writing-common-issues-note.md`
@@ -337,7 +337,13 @@ Why not recommended (writing structure warning):
 
  ## Usage Note
 
- This vendored guide keeps the introduction patterns but omits the original example-bank files. Treat the templates and sentence skeletons here as the canonical reference inside `superlab`.
+ This vendored guide should be paired with the local introduction example bank:
+
+ - `references/examples/index.md`
+ - `references/examples/introduction-examples.md`
+ - 1-2 matching files under `references/examples/introduction/`
+
+ Use the introduction patterns and sentence skeletons here first, then use the example files to choose a concrete paragraph flow and narrative progression. Reuse structure, not wording.
 
  ## Quick Quality Checklist
 
@@ -62,7 +62,11 @@ Definition:
 
  ### Example of the Three Elements
 
- Use the structure above directly. The original example-bank files are not bundled in `superlab`.
+ Use the structure above directly, then compare it with the local method example bank:
+
+ - `references/examples/index.md`
+ - `references/examples/method-examples.md`
+ - 1-2 matching files under `references/examples/method/`
 
  ## Method Content Decomposition
 
@@ -163,4 +167,4 @@ flowchart TB
 
  ## Usage Note
 
- This vendored guide keeps the method-writing patterns and checklists but does not bundle the upstream example-bank files. Treat the subsection structures, sentence skeletons, and review questions in this file as the direct working reference.
+ This vendored guide should be paired with the local method example bank listed above. Use this file for subsection structure, module-writing checks, and review questions first; then use the example files to choose a concrete presentation pattern. Reuse structure, not wording.
@@ -2,11 +2,13 @@
 
  `/lab:write` vendors the paper-writing references directly into `skills/lab/references/paper-writing/`.
  The goal is to keep the upstream writing discipline while removing brittle runtime dependence on a separately installed skill.
+ It also vendors the upstream example bank for the sections that currently have curated examples, so drafting can reuse concrete structure rather than only abstract guidance.
 
  ## Role Split
 
  - `lab` controls stage boundaries, evidence discipline, and durable artifacts.
  - the vendored paper-writing references control section structure, paragraph logic, and reviewer-facing polish.
+ - the vendored example bank controls example-driven structure selection for `abstract`, `introduction`, and `method`.
  - `/lab:write` links evidence-backed research outputs to paper-ready text.
 
  ## Required Shared Constraints
@@ -33,6 +35,30 @@ The goal is to keep the upstream writing discipline while removing brittle runti
  - `paper-writing/paper-review.md`
  - `paper-writing/does-my-writing-flow-source.md`
 
+ ## Vendored Example Bank
+
+ - `paper-writing/examples/index.md`
+ - `paper-writing/examples/abstract-examples.md`
+ - `paper-writing/examples/introduction-examples.md`
+ - `paper-writing/examples/method-examples.md`
+ - `paper-writing/examples/abstract/*`
+ - `paper-writing/examples/introduction/*`
+ - `paper-writing/examples/method/*`
+
+ ## Write-Time Rule
+
+ For `abstract`, `introduction`, and `method`:
+
+ 1. read the section guide first
+ 2. read the matching examples index
+ 3. read 1-2 concrete example files that match the intended structure
+ 4. reuse structure and sentence logic without copying wording
+
+ For `related work`, `experiments`, and `conclusion`:
+
+ 1. use the section guide directly
+ 2. do not invent non-existent example-bank files
+
  ## Attribution
 
  These references are adapted from:
@@ -38,6 +38,7 @@
  - Treat `/lab:auto` as an orchestration layer, not a replacement for existing `/lab:*` stages.
  - Treat `.lab/context/eval-protocol.md` as the source of truth for paper-facing metrics, metric glossary, table plan, gates, and structured experiment ladders.
  - Treat the evaluation protocol as source-backed, not imagination-backed: metric definitions, baseline behavior, comparison implementations, and deviations must come from recorded sources before they are used in gates or promotions.
+ - Treat paper-template selection as an explicit write-time gate, not as a silent fallback, when the loop is about to create `.tex` deliverables for the first time.
  - The contract must declare `Autonomy level` and `Approval status`, and execution starts only when approval is explicitly set to `approved`.
  - The contract must also declare a concrete terminal goal:
  - `rounds`
@@ -65,7 +66,7 @@
  - Enforce stage contracts, not just exit codes:
  - `run` and `iterate` must change persistent outputs under `results_root`
  - `review` must update canonical review context
- - `report` must produce `<deliverables_root>/report.md`
+ - `report` must produce `<deliverables_root>/report.md` and `<deliverables_root>/main-tables.md`
  - `write` must produce LaTeX output under `<deliverables_root>/paper/`
  - Treat promotion as incomplete unless it writes back to `data-decisions.md`, `decisions.md`, `state.md`, and `session-brief.md`.
  - Do not stop or promote on the basis of a metric or comparison claim whose source-backed definition is missing from the approved evaluation protocol.
@@ -106,4 +107,11 @@
  - allowed modifications
  - Then ask at most one clarifying question if a blocking field is still missing.
  - If `.lab/config/workflow.json` sets the workflow language to Chinese, write summaries, options, checklist items, task labels, and progress updates in Chinese unless a file path, code identifier, or literal metric name must remain unchanged.
+ - When the loop reaches `report`, apply the same workflow-language rule to `report.md` and the managed `main-tables.md` artifact.
+ - When the loop is about to enter `write` and `paper_template_root` is empty:
+ - if `paper_template_decision` is `unconfirmed`, ask one explicit question: continue with the default scaffold or attach a template directory first
+ - if the user chooses the default scaffold, persist `paper_template_decision: default-scaffold`
+ - if the user chooses a template, stop the loop and route to `superlab paper attach-template --path <dir>`
+ - if the current write target is a final manuscript export, `paper_template_decision` is `default-scaffold`, and `paper_template_final_reminder_acknowledged` is `false`, ask one final reminder question before finalizing
+ - if the user confirms staying on the default scaffold at that final reminder, persist `paper_template_final_reminder_acknowledged: true`
  - While the real experiment process is still alive, emit only a progress update and keep waiting. Do not present a terminal summary for that rung until the process exits or the rung hits an explicit stop boundary.
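The template gate above reads and persists three fields in `.lab/config/workflow.json`. A minimal sketch of that slice of the config before any decision has been made (other keys omitted; only these three field names appear in the rules above):

```json
{
  "paper_template_root": "",
  "paper_template_decision": "unconfirmed",
  "paper_template_final_reminder_acknowledged": false
}
```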
@@ -3,8 +3,10 @@
  ## Required Output
 
  - method overview
+ - selected metrics summary
  - experiment setup
  - validated main results
+ - managed main tables artifact under `<deliverables_root>/main-tables.md`
  - ablations
  - failed attempts
  - limitations
@@ -31,13 +33,15 @@
  - Tie every major claim to recorded summaries or iteration artifacts.
  - Structure tables, gates, and main claims against the approved evaluation protocol.
  - Do not restate metric definitions, baseline behavior, or comparison implementations from memory; use the approved evaluation protocol and its recorded sources.
+ - Carry the approved `Primary metrics`, `Secondary metrics`, and `Required terminal evidence` into both the report and the managed main-tables artifact.
  - If the report depends on a deviation from an original metric or implementation, state that deviation explicitly instead of smoothing it over.
+ - If `.lab/config/workflow.json` sets the workflow language to Chinese, write `report.md` and `<deliverables_root>/main-tables.md` in Chinese unless a file path, code identifier, or literal metric name must remain unchanged.
  - Prefer conservative interpretation over marketing language.
  - Leave a clear handoff path into `/lab:write` with evidence links that section drafts can cite.
 
  ## Interaction Contract
 
- - Start with a concise summary of the campaign outcome, strongest supported claim, and biggest reporting risk.
+ - Start with a concise summary of the campaign outcome, the selected primary and secondary metrics, the strongest supported claim, and the biggest reporting risk.
  - If a missing assumption would change report interpretation, ask one clarifying question at a time.
  - If there are multiple defensible report framings, present 2-3 approaches with trade-offs and recommend the most evidence-faithful framing before writing.
  - Keep an approval gate when the reporting frame would materially affect what the paper later claims.
@@ -13,6 +13,8 @@
 
  - `.lab/config/workflow.json`
  - `paper_template_root` from `.lab/config/workflow.json`
+ - `paper_template_decision` from `.lab/config/workflow.json`
+ - `paper_template_final_reminder_acknowledged` from `.lab/config/workflow.json`
 
  ## Context Read Set
 
@@ -38,6 +40,12 @@ Load the exact vendored file that matches the current target:
  - experiments -> `skills/lab/references/paper-writing/experiments.md`
  - conclusion -> `skills/lab/references/paper-writing/conclusion.md`
 
+ When the current target has a bundled example bank, load the examples index and 1-2 concrete example files that match the intended structure:
+
+ - abstract -> `skills/lab/references/paper-writing/examples/index.md`, `skills/lab/references/paper-writing/examples/abstract-examples.md`, plus one matching file under `skills/lab/references/paper-writing/examples/abstract/`
+ - introduction -> `skills/lab/references/paper-writing/examples/index.md`, `skills/lab/references/paper-writing/examples/introduction-examples.md`, plus 1-2 matching files under `skills/lab/references/paper-writing/examples/introduction/`
+ - method -> `skills/lab/references/paper-writing/examples/index.md`, `skills/lab/references/paper-writing/examples/method-examples.md`, plus 1-2 matching files under `skills/lab/references/paper-writing/examples/method/`
+
  Run these on every round:
 
  - section flow check -> `skills/lab/references/paper-writing/does-my-writing-flow-source.md`
@@ -49,8 +57,15 @@ Run these on every round:
  - LaTeX is the required manuscript output format.
  - If `paper_template_root` is configured, inspect that template directory before drafting and align the manuscript structure to it.
  - Treat attached template directories as user-owned and potentially modified. Do not rewrite template files unless the user explicitly asks.
- - If no paper template is configured, use the default LaTeX scaffold under the deliverable paper directory.
+ - If no paper template is configured and `paper_template_decision` is `unconfirmed`, ask one explicit question before the first `.tex` drafting round: continue with the default LaTeX scaffold, or attach a template directory first.
+ - If the user chooses the default scaffold, persist that choice in `.lab/config/workflow.json` by setting `paper_template_decision` to `default-scaffold`.
+ - If the user chooses to attach a template, stop the drafting loop and route to `superlab paper attach-template --path <dir>` instead of silently falling back.
+ - If `paper_template_decision` is `default-scaffold`, use the managed default LaTeX scaffold under the deliverable paper directory.
+ - If the current round is a final manuscript export or final-draft pass, `paper_template_root` is still empty, `paper_template_decision` is `default-scaffold`, and `paper_template_final_reminder_acknowledged` is `false`, ask one final reminder question about switching to a template before finalizing.
+ - If the user confirms staying on the default scaffold at that final reminder, persist `paper_template_final_reminder_acknowledged: true`.
  - Load only the current section guide. Do not load every section guide at once.
+ - Reuse example-bank structure, paragraph roles, and sentence logic when examples are bundled, but never copy wording verbatim.
+ - Treat example citations and example file names as writing references, not as evidence for the current paper.
  - Build a compact mini-outline before prose.
  - For each subsection, explicitly include motivation, design, and technical advantage when applicable.
  - Avoid a writing style that reads like incremental patching of a naive baseline.
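The template-gate rules above reduce to a small decision function over the persisted workflow fields. This is a sketch under stated assumptions: the field names match `.lab/config/workflow.json`, but the function name and return labels are illustrative, not the actual superlab implementation:

```javascript
// Sketch of the write-time template gate as a pure decision function.
// cfg holds the persisted workflow.json fields; isFinalExport marks a
// final manuscript export / final-draft pass.
function templateGateAction(cfg, isFinalExport) {
  // An attached template always wins.
  if (cfg.paper_template_root) return "use-attached-template";
  // First .tex round with no recorded decision: ask, never fall back silently.
  if (cfg.paper_template_decision === "unconfirmed") {
    return "ask-scaffold-or-template";
  }
  // One last chance to switch templates before the final export.
  if (
    isFinalExport &&
    cfg.paper_template_decision === "default-scaffold" &&
    !cfg.paper_template_final_reminder_acknowledged
  ) {
    return "ask-final-template-reminder";
  }
  return "use-default-scaffold";
}

console.log(
  templateGateAction(
    { paper_template_root: "", paper_template_decision: "unconfirmed" },
    false
  )
); // "ask-scaffold-or-template"
```

Keeping the gate pure means the "persist the answer" step stays separate: the caller writes `paper_template_decision` or `paper_template_final_reminder_acknowledged` back to `.lab/config/workflow.json` and re-runs the function.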