superlab 0.1.26 → 0.1.28

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (50)
  1. package/README.md +3 -0
  2. package/README.zh-CN.md +3 -0
  3. package/bin/superlab.cjs +11 -0
  4. package/lib/auto_contracts.cjs +1 -1
  5. package/lib/auto_runner.cjs +1 -1
  6. package/lib/context.cjs +30 -198
  7. package/lib/i18n.cjs +229 -19
  8. package/lib/lab_write_contract.json +8 -0
  9. package/package-assets/claude/commands/lab-idea.md +1 -1
  10. package/package-assets/claude/commands/lab-write.md +1 -1
  11. package/package-assets/claude/commands/lab.md +4 -3
  12. package/package-assets/codex/prompts/lab-idea.md +1 -1
  13. package/package-assets/codex/prompts/lab-write.md +1 -1
  14. package/package-assets/codex/prompts/lab.md +4 -3
  15. package/package-assets/shared/lab/.managed/scripts/validate_idea_artifact.py +147 -0
  16. package/package-assets/shared/lab/.managed/scripts/validate_manuscript_delivery.py +50 -4
  17. package/package-assets/shared/lab/.managed/scripts/validate_paper_claims.py +86 -0
  18. package/package-assets/shared/lab/.managed/scripts/validate_paper_plan.py +263 -0
  19. package/package-assets/shared/lab/.managed/scripts/validate_section_draft.py +181 -0
  20. package/package-assets/shared/lab/.managed/templates/idea.md +43 -0
  21. package/package-assets/shared/lab/.managed/templates/paper-plan.md +78 -0
  22. package/package-assets/shared/lab/config/workflow.json +2 -1
  23. package/package-assets/shared/lab/context/auto-mode.md +1 -1
  24. package/package-assets/shared/lab/context/next-action.md +4 -4
  25. package/package-assets/shared/lab/context/session-brief.md +8 -1
  26. package/package-assets/shared/lab/context/summary.md +14 -3
  27. package/package-assets/shared/skills/lab/SKILL.md +17 -16
  28. package/package-assets/shared/skills/lab/references/paper-writing/examples/abstract/template-b.md +2 -2
  29. package/package-assets/shared/skills/lab/references/paper-writing/examples/conclusion/conservative-claim-boundary.md +13 -13
  30. package/package-assets/shared/skills/lab/references/paper-writing/examples/experiments/main-results-and-ablation-latex.md +18 -17
  31. package/package-assets/shared/skills/lab/references/paper-writing/examples/experiments-examples.md +1 -1
  32. package/package-assets/shared/skills/lab/references/paper-writing/examples/index.md +1 -1
  33. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/pipeline-version-1-one-contribution-multi-advantages.md +3 -3
  34. package/package-assets/shared/skills/lab/references/paper-writing/examples/introduction/pipeline-version-2-two-contributions.md +1 -1
  35. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/annotated-figure-to-text.md +66 -0
  36. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/example-of-the-three-elements.md +11 -11
  37. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/{module-design-instant-ngp.md → module-design-multiresolution-encoding.md} +1 -1
  38. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/{module-triad-neural-body.md → module-triad-anchored-representation.md} +4 -4
  39. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/overview-template.md +4 -4
  40. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/pre-writing-questions.md +4 -3
  41. package/package-assets/shared/skills/lab/references/paper-writing/examples/method-examples.md +4 -4
  42. package/package-assets/shared/skills/lab/references/paper-writing/examples/related-work/closest-prior-gap-template.md +12 -12
  43. package/package-assets/shared/skills/lab/references/paper-writing/examples/related-work/topic-comparison-template.md +2 -2
  44. package/package-assets/shared/skills/lab/stages/auto.md +6 -2
  45. package/package-assets/shared/skills/lab/stages/data.md +0 -1
  46. package/package-assets/shared/skills/lab/stages/framing.md +0 -1
  47. package/package-assets/shared/skills/lab/stages/idea.md +30 -13
  48. package/package-assets/shared/skills/lab/stages/write.md +28 -4
  49. package/package.json +1 -1
  50. package/package-assets/shared/skills/lab/references/paper-writing/examples/method/neural-body-annotated-figure-text.md +0 -66
@@ -12,7 +12,7 @@ locally organized cite targets.
  5. Introduction technical-challenge files: `references/examples/introduction/technical-challenge-version-1-existing-task.md`, `references/examples/introduction/technical-challenge-version-2-existing-task-insight-backed-by-traditional.md`, `references/examples/introduction/technical-challenge-version-3-novel-task.md`, `references/examples/introduction/novel-task-challenge-decomposition.md`
  6. Introduction pipeline files: `references/examples/introduction/pipeline-version-1-one-contribution-multi-advantages.md`, `references/examples/introduction/pipeline-version-2-two-contributions.md`, `references/examples/introduction/pipeline-version-3-new-module-on-existing-pipeline.md`, `references/examples/introduction/pipeline-version-4-observation-driven.md`, `references/examples/introduction/pipeline-not-recommended-abstract-only.md`
  7. Method examples index: `references/examples/method-examples.md`
- 8. Method detail files: `references/examples/method/pre-writing-questions.md`, `references/examples/method/module-triad-neural-body.md`, `references/examples/method/neural-body-annotated-figure-text.md`, `references/examples/method/module-design-instant-ngp.md`, `references/examples/method/module-motivation-patterns.md`, `references/examples/method/section-skeleton.md`, `references/examples/method/overview-template.md`, `references/examples/method/example-of-the-three-elements.md`, `references/examples/method/method-writing-common-issues-note.md`
+ 8. Method detail files: `references/examples/method/pre-writing-questions.md`, `references/examples/method/module-triad-anchored-representation.md`, `references/examples/method/annotated-figure-to-text.md`, `references/examples/method/module-design-multiresolution-encoding.md`, `references/examples/method/module-motivation-patterns.md`, `references/examples/method/section-skeleton.md`, `references/examples/method/overview-template.md`, `references/examples/method/example-of-the-three-elements.md`, `references/examples/method/method-writing-common-issues-note.md`
  9. Related work examples index: `references/examples/related-work-examples.md`
  10. Related work files: `references/examples/related-work/closest-prior-gap-template.md`, `references/examples/related-work/topic-comparison-template.md`
  11. Experiments examples index: `references/examples/experiments-examples.md`
@@ -5,7 +5,7 @@

  ```latex
  % In this paper, we propose a novel framework …
- %% Example: In this paper, we introduce a novel implicit neural representation for dynamic humans, named Neural Body, to solve the challenge of novel view synthesis from sparse views.
+ %% Example: In this paper, we introduce a structured anchor-based representation, named AnchorField, to solve the challenge of reconstructing a changing target from sparse observations.
  In this paper, we propose a novel framework/representation, named [method name] for [xxx task].

  % Teaser for basic idea
@@ -13,11 +13,11 @@ In this paper, we propose a novel framework/representation, named [method name]
  The basic idea is illustrated in [xxx Figure].

  % One-sentence key novelty/contribution (very important ability)
- %% Example: For the implicit fields at different frames, instead of learning them separately, Neural Body generates them from the same set of latent codes.
+ %% Example: Instead of learning each observation-specific field separately, AnchorField generates them from the same set of anchor codes.
  Our innovation is in [one sentence for key novelty].

  % Method details
- %% Example: Specifically, we anchor a set of latent codes to the vertices of a deformable human model (SMPL in this work), namely that their spatial locations vary with the human pose. To obtain the 3D representation at a frame, we first transform the code locations based on the human pose, which can be reliably estimated from sparse camera views. Then, a network is designed to regress the density and color for any 3D point based on these latent codes. Both the latent codes and the network are jointly learned from images of all video frames during the reconstruction.
+ %% Example: Specifically, we anchor a set of latent codes to a reference support structure so that their spatial locations follow the current observation. To obtain the target representation for one condition, we first transform the code locations using the available alignment signal. Then, a prediction network estimates the required outputs at each query point from these codes. Both the anchor codes and the prediction network are jointly learned from all observations.
  Specifically, [how it works in detail].

  % Advantage 1
@@ -5,7 +5,7 @@

  ```latex
  % In this paper, we propose a novel framework …
- %% Example: In this paper, we introduce a novel implicit neural representation for dynamic humans, named Neural Body, to solve the challenge of novel view synthesis from sparse views.
+ %% Example: In this paper, we introduce a structured anchor-based representation, named AnchorField, to solve the challenge of reconstructing a changing target from sparse observations.
  In this paper, we propose a novel framework/representation, named [method name] for [xxx task].

  % One-sentence key novelty
@@ -0,0 +1,66 @@
+ # Annotated Figure (Text Conversion)
+
+ This file converts an annotated method figure into reusable writing notes.
+
+ ## Purpose
+
+ Use this mapping to understand how one Method section can explicitly separate:
+
+ 1. Module motivation
+ 2. Module design (data structure)
+ 3. Module design (forward process)
+ 4. Technical advantages
+
+ ## Block-by-Block Mapping
+
+ ### Section 3.1: Structured Anchor Codes
+
+ 1. **Module design (data structure)**
+    - The paragraph defines structured codes anchored to a reference support structure.
+    - It explains what is constructed: latent codes, anchor positions, and the condition-dependent transformation that keeps them aligned with the current instance.
+
+ 2. **Technical advantages**
+    - The paragraph explains why this design works better: it preserves correspondence across observations and keeps the representation aligned with changing geometry.
+    - It highlights why anchoring codes to a structured support is beneficial.
+
+ ### Section 3.2: Code Diffusion
+
+ 1. **Motivation of this module**
+    - The paragraph states the remaining problem: direct interpolation of sparse anchor codes leaves many query locations weakly represented.
+    - This motivates propagation from anchor locations to nearby continuous space.
+
+ 2. **Module design (forward process)**
+    - The paragraph explains the execution pipeline: build sparse feature volumes, run sparse convolutions, interpolate codes at query points, and feed those codes to prediction networks.
+    - This is a canonical input -> steps -> output module description.
+
+ ### Section 3.3: Field Regression
+
+ 1. **Module design (forward process) for the first prediction head**
+    - The first paragraph defines how one target quantity is regressed from the latent code and condition inputs.
+
+ 2. **Module design (data structure) for the second prediction head**
+    - The next paragraph introduces required inputs or embeddings such as latent code, query direction, spatial location, or condition tokens.
+
+ 3. **Module design (forward process) for the second prediction head**
+    - The following paragraph describes how those inputs are encoded and passed into the second prediction network for the final estimate.
+
+ ### Section 3.4: Output Aggregation
+
+ 1. **Module design (forward process)**
+    - The paragraph describes the final aggregation process that converts pointwise predictions into the output seen by the reader or evaluator.
+
+ ## Reusable Writing Pattern from This Figure
+
+ For each module subsection, follow this order:
+
+ 1. `Motivation`: state unresolved challenge and technical reason.
+ 2. `Design-1`: define structure/representation/network.
+ 3. `Design-2`: describe forward process in execution order.
+ 4. `Advantage`: explain why this module improves over alternatives.
+
+ ## Suggested Paragraph Starters
+
+ 1. Motivation: `A remaining challenge is ...`
+ 2. Data structure design: `We represent ... with ...`
+ 3. Forward process: `Given [input], we first ... then ... finally ...`
+ 4. Technical advantage: `Compared with previous methods, this design ... because ...`
@@ -18,27 +18,27 @@ Each `% ...` annotation explains the paragraph(s) immediately below it.
  \subsection{3.1. Structured latent codes}

  % Module design: introduce the module's data structure
- To control the spatial locations of latent codes with the human pose, we anchor these latent codes to a deformable human body model (SMPL) [38]. SMPL is a skinned vertex-based model, which is defined as a function of shape parameters, pose parameters, and a rigid transformation relative to the SMPL coordinate system. The function outputs a posed 3D mesh with 6890 vertices. Specifically, we define a set of latent codes \( Z = \{z_1, z_2, ..., z_{6890}\} \) on vertices of the SMPL model. For the frame \( t \), SMPL parameters \( S_t \) are estimated from the multi-view images \( \{I_t^c \mid c = 1, ..., N_c\} \) using [26]. The spatial locations of the latent codes are then transformed based on the human pose \( S_t \) for the density and color regression. Figure 3 shows an example. The dimension of latent code \( z \) is set to 16 in our experiments.
+ To keep latent codes aligned with the current instance, we anchor them to a deformable reference structure [38]. This structure is parameterized by shape variables, condition variables, and a rigid transformation relative to its own coordinate system, and outputs a posed mesh or support geometry. Specifically, we define a set of latent codes \( Z = \{z_1, z_2, ..., z_M\} \) on the support points of that reference structure. For one observation \( t \), the condition parameters \( S_t \) are estimated from the available inputs \( \{I_t^c \mid c = 1, ..., N_c\} \) using an external alignment procedure [26]. The spatial locations of the latent codes are then transformed according to \( S_t \) before the downstream prediction heads consume them. Figure 3 shows an example. The latent-code dimension is set to 16 in this example.

  % Technical advantages of this module
- Similar to the local implicit representations [25, 5, 18], the latent codes are used with a neural network to represent the local geometry and appearance of a human. Anchoring these codes to a deformable model enables us to represent a dynamic human. With the dynamic human representation, we establish a latent variable model that maps the same set of latent codes to the implicit fields of density and color at different frames, which naturally integrates observations at different frames.
+ Similar to local implicit representations [25, 5, 18], the latent codes are used with a neural network to represent local structure and attributes. Anchoring these codes to a deformable support lets the model stay aligned with changing geometry instead of relearning a separate representation for every condition. This yields a latent-variable view in which the same anchor codes can explain multiple observations while sharing information across them.

  \subsection{3.2. Code diffusion}

  % Motivation of this module
- Figure 3(a) shows the process of code diffusion. The implicit fields assign the density and color to each point in the 3D space, which requires us to query the latent codes at continuous 3D locations. This can be achieved with the trilinear interpolation. However, since the structured latent codes are relatively sparse in the 3D space, directly interpolating the latent codes leads to zero vectors at most 3D points. To solve this problem, we diffuse the latent codes defined on the surface to nearby 3D space.
+ Figure 3(a) shows the process of code propagation. The target fields assign outputs to each point in the domain, which requires querying the latent codes at continuous locations. This can be achieved with trilinear interpolation. However, because the structured anchor codes are sparse, directly interpolating them leaves many query points weakly represented. To solve this problem, we propagate the codes defined on the support structure to nearby space.

  % Module design: introduce module design by describing the module forward process
- Inspired by [65, 56, 49], we choose the SparseConvNet [21] to efficiently process the structured latent codes, whose architecture is described in Table 1. Specifically, based on the SMPL parameters, we compute the 3D bounding box of the human and divide the box into small voxels with voxel size of \( 5mm \times 5mm \times 5mm \). The latent code of a non-empty voxel is the mean of latent codes of SMPL vertices inside this voxel. SparseConvNet utilizes 3D sparse convolutions to process the input volume and output latent code volumes with \( 2\times, 4\times, 8\times, 16\times \) downsampled sizes. With the convolution and downsampling, the input codes are diffused to nearby space. Following [56], for any point in 3D space, we interpolate the latent codes from multi-scale code volumes of network layers 5, 9, 13, 17, and concatenate them into the final latent code. Since the code diffusion should not be affected by the human position and orientation in the world coordinate system, we transform the code locations to the SMPL coordinate system.
+ Inspired by [65, 56, 49], we choose a sparse convolutional backbone [21] to efficiently process the structured latent codes, whose architecture is described in Table 1. Specifically, based on the reference-structure parameters, we compute a 3D bounding box around the active support and divide that box into small voxels with voxel size \( 5mm \times 5mm \times 5mm \). The latent code of a non-empty voxel is the mean of the anchor codes that fall inside it. The sparse backbone applies 3D sparse convolutions to this input volume and outputs latent code volumes at \( 2\times, 4\times, 8\times, 16\times \) downsampled scales. With convolution and downsampling, the input codes are propagated to nearby space. Following [56], for any point in the domain, we interpolate the latent codes from multi-scale code volumes of network layers 5, 9, 13, and 17, and concatenate them into the final latent code. Since propagation should not be affected by global position and orientation, we transform code locations to the reference coordinate system.

- For any point \( \mathbf{x} \) in 3D space, we query its latent code from the latent code volume. Specifically, the point \( \mathbf{x} \) is first transformed to the SMPL coordinate system, which aligns the point and the latent code volume in 3D space. Then, the latent code is computed using the trilinear interpolation. For the SMPL parameters \( S_t \), we denote the latent code at point \( \mathbf{x} \) as \( \psi(\mathbf{x}, Z, S_t) \). The code vector is passed into MLP networks to predict the density and color for point \( \mathbf{x} \).
+ For any query point \( \mathbf{x} \), we obtain its latent code from the propagated code volume. Specifically, the point \( \mathbf{x} \) is first transformed to the reference coordinate system so that the query and code volume are aligned. Then, the latent code is computed with trilinear interpolation. For condition parameters \( S_t \), we denote the latent code at point \( \mathbf{x} \) by \( \psi(\mathbf{x}, Z, S_t) \). The resulting code vector is passed to prediction networks that estimate the target outputs at \( \mathbf{x} \).

  \subsection{3.3. Density and color regression}

- Figure 3(b) overviews the regression of density and color for any point in 3D space. The density and color fields are represented by MLP networks. Details of network architectures are described in the supplementary material.
+ Figure 3(b) overviews the regression of the target outputs for any query point. The corresponding fields are represented by MLP networks. Details of the network architectures are described in the supplementary material.

  % Module design: introduce module design by describing the module forward process
- \textbf{Density model.} For the frame \( t \), the volume density at point \( \mathbf{x} \) is predicted as a function of only the latent code \( \psi(\mathbf{x}, Z, S_t) \), which is defined as:
+ \textbf{First prediction head.} For condition \( t \), the first target quantity at point \( \mathbf{x} \) is predicted as a function of only the latent code \( \psi(\mathbf{x}, Z, S_t) \), which is defined as:

  \[
  \sigma_t(\mathbf{x}) = M_{\sigma}(\psi(\mathbf{x}, Z, S_t)),
@@ -48,20 +48,20 @@ Figure 3(b) overviews the regression of density and color for any point in 3D sp
  where \( M_{\sigma} \) represents an MLP network with four layers.

  % Module design: introduce the module's data structure
- \textbf{Color model.} Similar to [37, 44], we take both the latent code \( \psi(\mathbf{x}, Z, S_t) \) and the viewing direction \( \mathbf{d} \) as input for the color regression. To model the location-dependent incident light, the color model also takes the spatial location \( \mathbf{x} \) as input. We observe that temporally-varying factors affect the human appearance, such as secondary lighting and self-shadowing. Inspired by the auto-decoder [48], we assign a latent embedding \( \ell_t \) for each video frame \( t \) to encode the temporally-varying factors.
+ \textbf{Second prediction head.} Similar to [37, 44], we take both the latent code \( \psi(\mathbf{x}, Z, S_t) \) and an observation-dependent direction \( \mathbf{d} \) as input for the second target. To model location-dependent effects, this head also takes the spatial location \( \mathbf{x} \) as input. We further observe that condition-specific factors can change the target output. Inspired by the auto-decoder [48], we assign a latent embedding \( \ell_t \) for each condition \( t \) to encode those varying factors.

  % Module design: introduce module design by describing the module forward process
- Specifically, for the frame \( t \), the color at \( \mathbf{x} \) is predicted as a function of the latent code \( \psi(\mathbf{x}, Z, S_t) \), the viewing direction \( \mathbf{d} \), the spatial location \( \mathbf{x} \), and the latent embedding \( \ell_t \). Following [51, 44], we apply the positional encoding to both the viewing direction \( \mathbf{d} \) and the spatial location \( \mathbf{x} \), which enables better learning of high frequency functions. The color model at frame \( t \) is defined as:
+ Specifically, for condition \( t \), the second target at \( \mathbf{x} \) is predicted as a function of the latent code \( \psi(\mathbf{x}, Z, S_t) \), the observation direction \( \mathbf{d} \), the spatial location \( \mathbf{x} \), and the latent embedding \( \ell_t \). Following [51, 44], we apply positional encoding to both \( \mathbf{d} \) and \( \mathbf{x} \), which enables better learning of high-frequency functions. The second prediction head at condition \( t \) is defined as:

  \[
  c_t(\mathbf{x}) = M_c(\psi(\mathbf{x}, Z, S_t), \gamma_d(\mathbf{d}), \gamma_x(\mathbf{x}), \ell_t),
  \tag{2}
  \]

- where \( M_c \) represents an MLP network with two layers, and \( \gamma_d \) and \( \gamma_x \) are positional encoding functions for viewing direction and spatial location, respectively. We set the dimension of \( \ell_t \) to 128 in experiments.
+ where \( M_c \) represents an MLP network with two layers, and \( \gamma_d \) and \( \gamma_x \) are positional encoding functions for direction and spatial location, respectively. We set the dimension of \( \ell_t \) to 128 in this example.

  \subsection{3.4. Volume rendering}

  % Module design: introduce module design by describing the module forward process
- Given a viewpoint, we utilize the classical volume rendering techniques to render the Neural Body into a 2D image. The pixel colors are estimated via the volume rendering integral equation [27] that accumulates volume densities and colors along the corresponding camera ray. In practice, the integral is approximated using numerical quadrature [41, 44]. Given a pixel, we first compute its camera ray \( \mathbf{r} \) using the camera parameters and sample \( N_k \) points \( \{\mathbf{x}_k\}_{k=1}^{N_k} \) along camera ray \( \mathbf{r} \) between near and far bounds. The scene bounds are estimated based on the SMPL model. Then, Neural Body predicts volume densities and colors at these points. For the video frame \( t \), the rendered color \( \hat{C}_t(\mathbf{r}) \) ...
+ Given an observation direction, we utilize a standard aggregation procedure to render the predicted field into a 2D output. Pixel values are estimated via a volume-integration equation [27] that accumulates the predicted quantities along the corresponding ray. In practice, the integral is approximated using numerical quadrature [41, 44]. Given a pixel, we first compute its ray \( \mathbf{r} \) using the camera parameters and sample \( N_k \) points \( \{\mathbf{x}_k\}_{k=1}^{N_k} \) along that ray between near and far bounds. The scene bounds are estimated from the reference structure. Then, the model predicts the target outputs at these points. For condition \( t \), the rendered value \( \hat{C}_t(\mathbf{r}) \) ...
  ```
@@ -1,4 +1,4 @@
- # Module Design Example
+ # Module Design Example (Multiresolution Encoding)

  This example uses `%` comments as annotations.
  Each `% ...` annotation explains the paragraph(s) immediately below it.
@@ -1,11 +1,11 @@
- # Module Triad Example (Neural Body)
+ # Module Triad Example (Anchored Representation)

- `Use Neural Body to understand the three elements of a module: design, motivation, and technical advantages.`
+ `Use this anchored-representation example to understand the three elements of a module: design, motivation, and technical advantages.`

  Local source references:

  1. Annotated figure showing motivation/design/advantages split.
- 3. Text-converted annotation notes: `references/examples/method/neural-body-annotated-figure-text.md`
+ 3. Text-converted annotation notes: `references/examples/method/annotated-figure-to-text.md`

  Triad mapping template:

@@ -15,5 +15,5 @@ Triad mapping template:

  Direct usage:

- 1. Read `neural-body-annotated-figure-text.md` to map each paragraph to one triad element.
+ 1. Read `annotated-figure-to-text.md` to map each paragraph to one triad element.
  2. Rebuild your own Method subsection with the same triad order.
@@ -18,12 +18,12 @@
  %% Example: The overview of the proposed model is illustrated in Figure 3.

  % Explain what Section 3.1 covers
- %% Example 1: Neural Body starts from a set of structured latent codes attached to the surface of a deformable human model (Section 3.1).
- %% Example 2: In this section, we first describe how to model 3D scenes with MLP maps (Section 3.1).
+ %% Example 1: The model starts from a set of structured anchor codes attached to a reference support structure (Section 3.1).
+ %% Example 2: In this section, we first describe how to model the target space with learnable field maps (Section 3.1).

  % Explain what Section 3.2 covers
- %% Example 1: The latent code at any location around the surface can be obtained with a code diffusion process (Section 3.2) and then decoded to density and color values by neural networks (Section 3.3).
- %% Example 2: Then, Section 3.2 discusses how to represent volumetric videos with dynamic MLP maps.
+ %% Example 1: The code at any nearby location is obtained with a propagation process (Section 3.2) and then decoded into the target outputs by prediction networks (Section 3.3).
+ %% Example 2: Then, Section 3.2 discusses how to represent changing scenes with dynamic field maps.

  % Explain what Section 3.3 covers
  %% Example 3: Finally, we introduce some strategies to speed up the rendering process (Section 3.3).
@@ -1,13 +1,14 @@
  # Method Pre-Writing Questions


- `Before writing Method, answer: (1) what modules exist, and (2) for each module, what is its workflow, why is it needed, and why does it work.`
+ `Before writing Method, answer: (1) what modules exist, and (2) for each module, what is its role, what flow does it participate in, why is it needed, and why does it work.`

  ```text
  Questions:
  (1) What modules are in the method?
- (2) For each module, answer three questions:
- - What is this module's workflow?
+ (2) For each module, answer four questions:
+ - What role does this module play?
+ - What flow or interface does this module participate in?
  - Why do we need this module?
  - Why does this module work?
  ```
@@ -2,15 +2,15 @@

  All method example cites should point to the local files below.

- ## A. Planning and Writing Workflow
+ ## A. Planning and Writing Sequence

  1. Pre-writing questions: `references/examples/method/pre-writing-questions.md`

  ## B. Module Triad and Module-Level Writing

- 1. Module triad (Neural Body): `references/examples/method/module-triad-neural-body.md`
- 2. Neural Body figure text conversion: `references/examples/method/neural-body-annotated-figure-text.md`
- 3. Module design (Instant-NGP): `references/examples/method/module-design-instant-ngp.md`
+ 1. Module triad (anchored representation): `references/examples/method/module-triad-anchored-representation.md`
+ 2. Annotated figure text conversion: `references/examples/method/annotated-figure-to-text.md`
+ 3. Module design (multiresolution encoding): `references/examples/method/module-design-multiresolution-encoding.md`
  4. Module motivation patterns: `references/examples/method/module-motivation-patterns.md`

  ## C. Section-Level Templates
@@ -5,16 +5,16 @@ the paragraph needs to explain the gap precisely.

  ```tex
  \paragraph{Closest prior work.}
- The closest prior line improves uplift ranking by sharpening the treatment-aware
- scoring objective rather than by changing the final decision rule
- itself~\cite{lee2021uplift,wang2022counterfactual}. These methods are strong
- references because they already optimize for ranking quality and outperform
- simple pointwise baselines on standard benchmarks. However, they still assume
- that the learned score can be used directly as the final ranking signal across
- datasets with different calibration behavior. That assumption is limiting in our
- setting, where the raw score often contains useful relative information but is
- not always aligned with the final protocol-level target. Our method keeps the
- ranking-focused backbone but changes the final stage by inserting a post-hoc
- calibration module, so the method can preserve the stronger internal signal
- while adapting the exposed score to the benchmark-specific decision scale.
+ The closest prior line improves score quality by refining the intermediate
+ representation rather than by changing the final prediction rule
+ itself~\cite{closestprior2021,anchor2022}. These methods are strong references
+ because they already improve ranking or ordering quality over simple pointwise
+ baselines on standard benchmarks. However, they still assume that the learned
+ score can be exposed directly as the final decision signal across settings with
+ different calibration behavior. That assumption is limiting in our setting,
+ where the internal score often carries useful relative information but is not
+ always aligned with the final evaluation target. Our method keeps the stronger
+ intermediate backbone but changes the last stage by inserting a lightweight
+ adjustment module, so the method can preserve the better internal signal while
+ adapting the exposed score to the evaluation scale used at test time.
  ```
@@ -12,8 +12,8 @@ known relationships among samples or treatment groups. However, they typically
  assume that the structured module alone is sufficient to produce well-calibrated
  scores at evaluation time. In our setting, that assumption is too strong: the
  raw ranking signal remains useful, but it is not always aligned with the final
- decision scale required by the frozen protocol. We therefore keep the
- structure-aware backbone but add a separate calibration stage, which lets us
+ decision scale required by the fixed evaluation setup. We therefore keep the
+ structure-aware backbone but add a separate adjustment stage, which lets us
  reuse the stronger signal while correcting its final score mapping.
  ```
 
@@ -56,7 +56,7 @@
  - Default allowed stages are `run`, `iterate`, `review`, and `report`. Only include `write` when framing is already approved and manuscript drafting is within scope.
  - Do not automatically change the research mission, paper-facing framing, or core claims.
  - You may add exploratory datasets, benchmarks, and comparison methods inside the exploration envelope.
- - You may promote exploratory additions to the primary package only when the contract's promotion policy is satisfied and the promotion is written back into `data-decisions.md`, `decisions.md`, `state.md`, and `session-brief.md`.
+ - You may promote exploratory additions to the primary package only when the contract's promotion policy is satisfied and the promotion is written back into `data-decisions.md`, `decisions.md`, `state.md`, and `workflow-state.md`.
  - Poll long-running commands until they finish, hit a timeout, or hit a stop condition.
  - Keep a poll-based waiting loop instead of sleeping blindly.
  - Do not treat a short watcher such as `sleep 30`, a one-shot `pgrep`, or a single `metrics.json` probe as the rung command when the real experiment is still running.
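The poll-based waiting rules above can be sketched as a small loop. This is an illustrative pattern, not code shipped by the package; `poll_until` and its parameters are assumed names, and the interval and timeout values are placeholders.

```python
import time

def poll_until(is_done, timeout_s=3600, poll_s=15, stop_check=None):
    """Poll until the watched run finishes, the timeout is hit, or an
    explicit stop condition fires -- never sleep blindly past those
    boundaries, and never report a terminal summary while still waiting."""
    deadline = time.monotonic() + timeout_s
    while not is_done():
        if time.monotonic() > deadline:
            return "timeout"
        if stop_check is not None and stop_check():
            return "stopped"
        time.sleep(poll_s)  # emit a progress update here, not a terminal summary
    return "finished"
```

The point of the loop, compared with a one-shot `sleep 30` watcher, is that every wakeup re-checks completion, the timeout, and the stop condition before waiting again.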
@@ -74,7 +74,7 @@
  - `review` must update canonical review context
  - `report` must produce `<deliverables_root>/report.md` and `<deliverables_root>/main-tables.md`
  - `write` must produce LaTeX output under `<deliverables_root>/paper/`
- - Treat promotion as incomplete unless it writes back to `data-decisions.md`, `decisions.md`, `state.md`, and `session-brief.md`.
+ - Treat promotion as incomplete unless it writes back to `data-decisions.md`, `decisions.md`, `state.md`, and `workflow-state.md`.
  - Do not stop or promote on the basis of a metric or comparison claim whose source-backed definition is missing from the approved evaluation protocol.
  - Before each rung and before each success, stop, or promotion decision, re-check the generic academic-risk questions: setting semantics, visibility/leakage, anchor or label policy, scale comparability, metric validity, comparison validity, statistical validity, claim boundary, and integrity self-check.
  - Before each success, stop, or promotion decision, also re-check the anomaly policy: whether anomaly signals fired, whether simpler explanations were ruled out, whether a cross-check was performed, and whether the current interpretation is still the narrowest supported one.
@@ -137,4 +137,8 @@
  - if the user chooses a template, stop the loop and route to `superlab paper attach-template --path <dir>`
  - if the current write target is a final manuscript export, `paper_template_decision` is `default-scaffold`, and `paper_template_final_reminder_acknowledged` is `false`, ask one final reminder question before finalizing
  - if the user confirms staying on the default scaffold at that final reminder, persist `paper_template_final_reminder_acknowledged: true`
+ - During ordinary `write` drafting rounds, draft manuscript text in `workflow_language`.
+ - If the current write target is a final manuscript export, `workflow_language` and `paper_language` differ, and `paper_language_finalization_decision` is `unconfirmed`, ask one explicit question before finalizing: keep the manuscript in `workflow_language`, or convert the final manuscript to `paper_language`.
+ - If the user chooses to keep the draft language, persist `paper_language_finalization_decision: keep-workflow-language`.
+ - If the user chooses to convert, persist `paper_language_finalization_decision: convert-to-paper-language`.
  - While the real experiment process is still alive, emit only a progress update and keep waiting. Do not present a terminal summary for that rung until the process exits or the rung hits an explicit stop boundary.
@@ -27,7 +27,6 @@
  - `.lab/context/state.md`
  - `.lab/context/decisions.md`
  - `.lab/context/data-decisions.md`
- - `.lab/context/session-brief.md`
 
  ## Required Artifact
 
@@ -26,7 +26,6 @@
  - `.lab/context/state.md`
  - `.lab/context/decisions.md`
  - `.lab/context/terminology-lock.md`
- - `.lab/context/session-brief.md`
 
  ## Required Artifact
 
@@ -2,6 +2,7 @@
 
  ## Required Output
 
+ - plain-language scenario description
  - one-sentence problem statement
  - why the problem matters in plain language
  - failure case
@@ -9,8 +10,11 @@
  - contribution category
  - breakthrough level
  - existing methods summary
+ - closest-prior-work comparison
  - why the proposed idea is better than existing methods
+ - rough plain-language approach description
  - three meaningful points
+ - literature scoping bundle with a default target of 20 sources, or an explicit justification for a smaller bundle in a narrowly scoped field
  - literature-backed framing
  - sourced datasets and metrics
  - credible baseline shortlist
@@ -27,10 +31,14 @@
  - Mark generated hypotheses clearly.
  - Do not merge them into one undifferentiated summary.
  - Ask one clarifying question at a time when a missing assumption would materially change the proposal.
+ - Build a source bundle before claiming novelty. The default target is 20 relevant sources split across closest prior work, recent strong papers, benchmark or evaluation papers, surveys or taxonomies, and adjacent-field work when useful.
+ - If the field is genuinely too narrow to support that target, say so explicitly and justify the smaller literature bundle instead of silently skipping the search.
+ - The idea artifact must follow the repository `workflow_language`, not whichever language is easiest locally.
  - Before writing the full artifact, give the user a short summary with the one-sentence problem, why current methods fail, and the three meaningful points.
 
  ## Context Read Set
 
+ - `.lab/config/workflow.json`
  - `.lab/context/mission.md`
  - `.lab/context/open-questions.md`
 
@@ -42,24 +50,33 @@
 
  ## Recommended Structure
 
- 1. One-sentence problem
- 2. Failure of existing methods
- 3. Idea classification, contribution category, and breakthrough level
- 4. Existing methods and shared assumptions
- 5. Why the proposed idea is better
- 6. Three meaningful points
- 7. Candidate approaches and recommendation
- 8. Dataset and metric candidates
- 9. Falsifiable hypothesis
- 10. Expert critique
- 11. Revised proposal
- 12. Approval gate
- 13. Minimum viable experiment
+ 1. Scenario
+ 2. One-sentence problem
+ 3. Why the problem matters
+ 4. Failure of existing methods
+ 5. Idea classification, contribution category, and breakthrough level
+ 6. Existing methods and shared assumptions
+ 7. Literature scoping bundle
+ 8. Closest-prior-work comparison
+ 9. Rough approach in plain language
+ 10. Why the proposed idea is better
+ 11. Three meaningful points
+ 12. Candidate approaches and recommendation
+ 13. Dataset, baseline, and metric candidates
+ 14. Falsifiable hypothesis
+ 15. Expert critique
+ 16. Revised proposal
+ 17. Approval gate
+ 18. Minimum viable experiment
 
  ## Writing Standard
 
  - Keep the problem statement short, concrete, and easy to scan.
+ - Explain the scenario, target user or beneficiary, and why the problem matters before talking about novelty.
  - State why the target problem matters before talking about the method.
  - Compare against existing methods explicitly, not by vague novelty language.
+ - Do not call something new without a literature scoping bundle and a closest-prior comparison.
+ - Explain what current methods do, why they fall short, and roughly how the proposed idea would work in plain language.
  - The three meaningful points should each fit in one direct sentence.
+ - Before approval, run `.lab/.managed/scripts/validate_idea_artifact.py --idea <idea-artifact> --workflow-config .lab/config/workflow.json`.
  - Do not leave `.lab/context/mission.md` as an empty template after convergence; write the approved problem, why it matters, the current benchmark scope, and the approved direction back into canonical context.
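A minimal sketch of the kind of structural check such a validator might perform. The heading list and the `missing_headings` function below are illustrative assumptions about its behavior, not the bundled script's actual interface.

```python
import re

# Headings the recommended structure above expects in the idea artifact.
# Both this list and the function are illustrative, not the real validator.
REQUIRED_HEADINGS = [
    "Scenario",
    "One-sentence problem",
    "Why the problem matters",
    "Literature scoping bundle",
    "Closest-prior-work comparison",
    "Falsifiable hypothesis",
]

def missing_headings(markdown_text):
    """Return the required headings the artifact does not yet contain."""
    found = {m.strip() for m in re.findall(r"^#+\s*(.+)$", markdown_text, re.M)}
    return [h for h in REQUIRED_HEADINGS if h not in found]
```

A gate like this is cheap to run before approval and turns "the artifact looks complete" into a checkable condition.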
@@ -12,9 +12,12 @@
  ## Config Read Set
 
  - `.lab/config/workflow.json`
+ - `workflow_language` from `.lab/config/workflow.json`
+ - `paper_language` from `.lab/config/workflow.json`
  - `paper_template_root` from `.lab/config/workflow.json`
  - `paper_template_decision` from `.lab/config/workflow.json`
  - `paper_template_final_reminder_acknowledged` from `.lab/config/workflow.json`
+ - `paper_language_finalization_decision` from `.lab/config/workflow.json`
 
  ## Context Read Set
 
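If the keys behave as the read set above describes, the relevant slice of `.lab/config/workflow.json` plausibly looks like the fragment below. The key names come from the read set; the exact shape and the default values shown are assumptions.

```json
{
  "workflow_language": "en",
  "paper_language": "en",
  "paper_template_root": "",
  "paper_template_decision": "unconfirmed",
  "paper_template_final_reminder_acknowledged": false,
  "paper_language_finalization_decision": "unconfirmed"
}
```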
@@ -58,6 +61,7 @@ Run these on every round:
 
  - Change one section or one clearly bounded subsection per round.
  - LaTeX is the required manuscript output format.
+ - Ordinary manuscript drafting rounds should follow `workflow_language`.
  - If `paper_template_root` is configured, inspect that template directory before drafting and align the manuscript structure to it.
  - Treat attached template directories as user-owned and potentially modified. Do not rewrite template files unless the user explicitly asks.
  - If no paper template is configured and `paper_template_decision` is `unconfirmed`, ask one explicit question before the first `.tex` drafting round: continue with the default LaTeX scaffold, or attach a template directory first.
@@ -72,14 +76,25 @@ Run these on every round:
  - `.lab/.managed/templates/paper-references.bib`
  - If the current round is a final manuscript export or final-draft pass, `paper_template_root` is still empty, `paper_template_decision` is `default-scaffold`, and `paper_template_final_reminder_acknowledged` is `false`, ask one final reminder question about switching to a template before finalizing.
  - If the user confirms staying on the default scaffold at that final reminder, persist `paper_template_final_reminder_acknowledged: true`.
+ - If the current round is a final manuscript export or final-draft pass, `workflow_language` and `paper_language` differ, and `paper_language_finalization_decision` is `unconfirmed`, ask one explicit question before finalizing: keep the manuscript in `workflow_language`, or convert the final manuscript to `paper_language`.
+ - If the user chooses to keep the draft language, persist `paper_language_finalization_decision: keep-workflow-language`.
+ - If the user chooses to convert, persist `paper_language_finalization_decision: convert-to-paper-language`.
+ - If `paper_language_finalization_decision` is `convert-to-paper-language`, convert the final manuscript output to `paper_language` before accepting the final round.
  - Load only the current section guide. Do not load every section guide at once.
  - Reuse example-bank structure, paragraph roles, sentence logic, and paper-facing LaTeX asset patterns when examples are bundled, but never copy wording verbatim.
  - Treat example cites and example file names as writing references, not as evidence for the current paper.
  - Build a compact mini-outline before prose.
- - Build the paper asset plan before prose when the section carries experimental or method claims:
-   - decide which tables the section needs
-   - decide which figure placeholders the section needs
-   - decide which citations must appear in the section
+ - Build the paper asset plan before prose when the section carries introduction, experimental, method, related-work, or conclusion claims:
+   - record the asset coverage targets and gaps for the current paper
+   - record which table asset files the section needs, what each table should show, and which evidence supports it
+   - record which figure placeholder files the section needs, including the problem-setting or teaser figure when the paper needs reader-facing task setup
+   - record which analysis asset the paper needs beyond the main table and whether it will be a table or a figure
+   - record what each figure or analysis asset should show and why the reader needs it
+   - record which citation anchors must appear in the section and why each anchor matters
+ - Before drafting `introduction`, `method`, `experiments`, `related work`, or `conclusion`, run `.lab/.managed/scripts/validate_paper_plan.py --paper-plan .lab/writing/plan.md`.
+ - If the paper-plan validator fails, stop and fill `.lab/writing/plan.md` first instead of drafting prose.
+ - During ordinary draft rounds, run `.lab/.managed/scripts/validate_section_draft.py --section <section> --section-file <section-file> --mode draft` and `.lab/.managed/scripts/validate_paper_claims.py --section-file <section-file> --mode draft` after revising the active section.
+ - Treat draft-round output from the section and claim validators as warnings that must be recorded and addressed in the write-iteration artifact, not as immediate stop conditions.
  - For each subsection, explicitly include motivation, design, and technical advantage when applicable.
  - Avoid a writing style that reads like incremental patching of a naive baseline.
  - Keep terminology stable across the full paper.
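One way to read the draft-versus-final contract above: the same checks run in both modes, but only `final` mode blocks the round. The sketch below is a hypothetical model of that policy, not the bundled validators' interface.

```python
def apply_validator_findings(findings, mode):
    """In draft mode, findings are warnings to record in the write-iteration
    artifact; in final mode, any finding blocks acceptance of the round."""
    if mode == "draft":
        return {"accept_round": True, "warnings_to_record": list(findings)}
    if mode == "final":
        return {"accept_round": not findings, "warnings_to_record": []}
    raise ValueError(f"unknown mode: {mode}")
```

The asymmetry is deliberate: draft rounds keep momentum while still leaving a written trail, and final rounds refuse to ship anything the checks still flag.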
@@ -93,10 +108,17 @@ Run these on every round:
  - `<deliverables_root>/paper/tables/main-results.tex`
  - `<deliverables_root>/paper/tables/ablations.tex`
  - If the current paper contains method or experiments claims, materialize figure placeholders with captions, labels, and explicit figure intent:
+   - `<deliverables_root>/paper/figures/problem-setting.tex`
    - `<deliverables_root>/paper/figures/method-overview.tex`
    - `<deliverables_root>/paper/figures/results-overview.tex`
+ - If the current paper contains experimental claims, materialize one additional paper-facing analysis asset with caption, label, and explicit asset intent:
+   - `<deliverables_root>/paper/analysis/analysis-asset.tex`
  - Table assets must use paper-facing LaTeX structure with `booktabs`, caption, label, and consistent precision.
  - Figure placeholders must explain what the final figure should show and why the reader needs it.
+ - Core asset coverage for a paper-facing final draft should include a problem-setting or teaser figure, a method overview figure, a results overview figure, a main-results table, an ablation table, and one additional analysis asset.
+ - Keep `.lab/writing/plan.md` synchronized with the current table plan, figure plan, citation plan, and section-to-asset map whenever manuscript assets change.
+ - For final-draft or export rounds, run `.lab/.managed/scripts/validate_section_draft.py --section <section> --section-file <section-file> --mode final` and `.lab/.managed/scripts/validate_paper_claims.py --section-file <section-file> --mode final` before accepting the round.
+ - If the final-round section or claim validators fail, keep editing the affected section until they pass; do not stop at prose that is asset-complete but rhetorically weak or unsafe.
  - Run `.lab/.managed/scripts/validate_manuscript_delivery.py --paper-dir <deliverables_root>/paper` before accepting a final-draft or export round.
  - If the manuscript validator fails, keep editing and asset generation until it passes; do not stop at prose-only completion.
  - Run a LaTeX compile smoke test when a local LaTeX toolchain is available; if not available, record the missing verification in the write iteration artifact.
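A minimal sketch of a table asset meeting the requirements above (`booktabs` structure, caption, label, consistent precision). Method names and numbers are placeholders, not results from any paper.

```tex
% Illustrative placeholder only: booktabs rules, caption, label,
% and one-decimal precision throughout, as the asset rules require.
\begin{table}[t]
  \centering
  \caption{Main results. Placeholder values; one decimal throughout.}
  \label{tab:main-results}
  \begin{tabular}{lcc}
    \toprule
    Method   & Metric A $\uparrow$ & Metric B $\downarrow$ \\
    \midrule
    Baseline & 71.2 & 14.5 \\
    Ours     & 74.8 & 12.1 \\
    \bottomrule
  \end{tabular}
\end{table}
```

A figure placeholder under the same rules would pair a `\caption` and `\label` with a comment stating what the final figure should show and why the reader needs it.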
@@ -117,8 +139,10 @@
  - `<deliverables_root>/paper/sections/<section>.tex`
  - `<deliverables_root>/paper/tables/main-results.tex`
  - `<deliverables_root>/paper/tables/ablations.tex`
+ - `<deliverables_root>/paper/figures/problem-setting.tex`
  - `<deliverables_root>/paper/figures/method-overview.tex`
  - `<deliverables_root>/paper/figures/results-overview.tex`
+ - `<deliverables_root>/paper/analysis/analysis-asset.tex`
 
  ## Stop Conditions
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "superlab",
-   "version": "0.1.26",
+   "version": "0.1.28",
    "description": "Strict /lab research workflow installer for Codex and Claude",
    "keywords": [
      "codex",