@besser/research-paper-review 1.0.0
- package/README.md +70 -0
- package/SKILL.md +144 -0
- package/package.json +22 -0
package/README.md
ADDED
@@ -0,0 +1,70 @@
# research-paper-review

An AI agent skill for systematic academic paper review.

## What It Does

When you share a research paper (PDF, LaTeX, or plain text), this skill guides the agent through a structured review process:

1. **Venue context** — Adapts the review to the target conference/journal standards
2. **Structured summary** — Problem, contributions, methodology, results, limitations
3. **Numerical & consistency checks** — Cross-references numbers across text, tables, and figures; verifies statistics, acronyms, terminology, and citations
4. **Critical analysis** — Evaluates novelty, soundness, significance, clarity, reproducibility, and venue alignment
5. **Actionable feedback** — Strengths, weaknesses, questions, minor issues, and venue-specific recommendations
6. **Top 10 actions** — Prioritized by impact-to-effort ratio so authors know where to start

## Installation

### Option A: Agent skills CLI (Claude Code, Cursor, Copilot, Windsurf, etc.)

```bash
npx skills add BESSER-PEARL/agent-skills@research-paper-review
```

This auto-installs the skill into your local agent. It activates automatically when you share a paper.

### Option B: ChatGPT, Claude, Gemini, or any other AI chat

No installation needed. Just:

1. Copy the contents of [SKILL.md](./SKILL.md)
2. Paste it as:
   - **ChatGPT** → Custom Instructions or start of conversation
   - **Claude.ai** → Project Knowledge or start of conversation
   - **Gemini** → Gems or start of conversation
3. Upload your paper (PDF) and ask for a review

> **Tip:** For best results, paste the SKILL.md content first, then upload the paper and write:
> *"Review this paper for [VENUE] as a [TYPE] submission."*

## Usage

Share a paper with the agent and optionally specify the venue and paper type:

```
Review this paper for ICWE 2026 as a tool demo submission.
```

```
Check this PDF for numerical inconsistencies and broken references.
```

```
Give me pre-submission feedback on our TOSEM journal paper.
```

The agent will fetch venue-specific guidelines when available and produce a review following the output template in [SKILL.md](./SKILL.md).

## Supported Formats

- PDF files (read page by page)
- LaTeX source (`.tex` files, follows `\input` / `\include` commands)
- Plain text / markdown

## Authors

- [Armen Sulejmani](https://github.com/armensulejmani)
- [Ivan David Alfonso](https://github.com/ivan-alfonso)
- [Jordi Cabot](https://github.com/jcabot)

Part of the [BESSER-PEARL](https://github.com/BESSER-PEARL) project at the Luxembourg Institute of Science and Technology.
package/SKILL.md
ADDED
@@ -0,0 +1,144 @@
---
name: research-paper-review
description: Review and analyze academic research papers. Use this skill when the user asks to review a paper, analyze a publication, summarize research, critique methodology, extract key findings, compare papers, check for numerical inconsistencies, or assess novelty and contributions of academic work. Also triggers when the user mentions reading a PDF of a paper, wants a literature review, asks about related work, or wants to improve a paper before submission.
---

# Research Paper Review

Assist researchers in reviewing, analyzing, and critiquing academic papers systematically and thoroughly.

## When to Use

- User asks to review, summarize, or critique a research paper
- User shares a PDF or link to an academic paper
- User wants to assess methodology, contributions, or novelty
- User needs help writing a peer review
- User wants to compare multiple papers or do a literature survey
- User wants to improve a paper before submission (pre-submission review)
- User wants to check for numerical/statistical inconsistencies
- User wants venue-specific feedback (conference, journal, or preprint)

## Inputs

- Paper content: PDF, LaTeX source, or plain text
- Target venue (optional but recommended): conference, journal, or preprint target
  - Example: "MODELS 2026", "TOSEM", "SoSyM", "arXiv preprint"
- Type of paper (optional but recommended): "full research" paper, "short" paper, "new ideas" paper, "tool demo", "poster", ...
- Explicit reviewing guidelines (optional): if available, provide a description or URL with the reviewing criteria.

## Review Workflow

### Step 0: Pre-Processing / Venue Context

- If the target venue and/or the type of paper are provided, include them as context for all subsequent steps:

> "Review this paper as if it is intended for [TARGET VENUE] as a [TYPE PAPER] submission. Consider typical standards, expectations, page limits, scope, and audience for this venue and type of paper."

- Optional: Use the provided reviewing guidelines, or try to find them on the venue website (if available), with standards for methodology, novelty, empirical rigor, validation, and formatting.

### Step 1: Read the Paper

Identify the format and read accordingly:

- **PDF**: Use the Read tool with the `pages` parameter for large documents (max 20 pages per request).
- **LaTeX source**: Read the main `.tex` file first. Look for `\input{}` or `\include{}` commands to find additional sections, figures, and bibliography files. Use Grep to search for key commands like `\begin{abstract}`, `\section`, `\cite` across all `.tex` files.
- **Multiple files**: Use Glob with `**/*.tex` to find all source files, then read them in logical order (main file → sections → appendix).

In all cases, skim the abstract, introduction, and conclusion first to get the big picture before diving into details.
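As an illustration of the multi-file case, the `\input`/`\include` chain can be resolved with a small depth-first walk. This is a hypothetical sketch, not part of the skill's actual tooling — the function name and extension handling are assumptions:

```python
from pathlib import Path
import re

# Matches \input{...} and \include{...}; LaTeX allows the argument to omit ".tex".
INPUT_RE = re.compile(r"\\(?:input|include)\{([^}]+)\}")

def resolve_sources(main_tex: Path) -> list[Path]:
    """Depth-first list of .tex files reachable from the main file."""
    seen: list[Path] = []

    def visit(path: Path) -> None:
        if path in seen or not path.exists():
            return  # skip duplicates and dangling inputs
        seen.append(path)
        for name in INPUT_RE.findall(path.read_text(errors="ignore")):
            child = path.parent / name
            if child.suffix != ".tex":
                child = child.with_suffix(".tex")
            visit(child)

    visit(main_tex)
    return seen
```

The depth-first order approximates the reading order of the compiled document: each included file is visited right where it appears in its parent.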
### Step 2: Structured Summary

Show that you understand the paper by producing a summary covering:

1. **Problem Statement** — What problem does the paper address? Why does it matter?
2. **Contributions** — What are the claimed contributions? (list them)
3. **Approach/Methodology** — How do the authors solve the problem?
4. **Key Results** — What are the main findings/metrics?
5. **Limitations** — What are the acknowledged (and unacknowledged) limitations?

### Step 3: Numerical & Consistency Checks

This is where LLM-assisted review adds the most value, catching things humans easily miss during manual review. Run these checks systematically:

- **Numbers across text, tables, and figures**: Do values reported in the text match what's in the tables? Do figures reflect the data described?
- **Statistical consistency**: Do p-values, confidence intervals, and effect sizes align? Are sample sizes consistent throughout?
- **Calculations**: Verify percentages, averages, and sums. Check that reported improvements (e.g., "30% improvement") match the actual numbers.
- **Internal references**: Do all `\ref`, `\cite`, figure, and table references resolve? Are there dangling references or wrong numbering?
- **Acronyms**: Are all acronyms defined on first use?
- **Terminology consistency**: Is the same concept always referred to with the same term?
- **Citations**: Do all citations exist? Is the citation style uniform (i.e., all conference papers are cited using the same fields, and likewise for other venue types)?

Even minor errors (typos, broken references, wrong numbering) matter: reviewers often use these as signals that the paper was not carefully prepared.
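Two of the checks above can be sketched mechanically. This is a minimal illustration with hypothetical helper names; the acronym heuristic (3+ capital letters, defined when they appear in parentheses) is an assumption, not a complete rule:

```python
import re

def check_improvement(baseline: float, improved: float, claimed_pct: float,
                      tol: float = 0.5) -> bool:
    """Does the claimed % improvement match the raw numbers, within tol points?"""
    actual_pct = (improved - baseline) / baseline * 100
    return abs(actual_pct - claimed_pct) <= tol

def undefined_acronyms(text: str) -> list[str]:
    """Acronyms (3+ capitals) that never appear inside a parenthesized definition."""
    used = set(re.findall(r"\b[A-Z]{3,}\b", text))
    defined = set(re.findall(r"\(([A-Z]{3,})\)", text))
    return sorted(used - defined)

# "accuracy rose from 0.62 to 0.81" claimed as "30% improvement": actually ~30.6%
print(check_improvement(0.62, 0.81, 30, tol=1.0))
print(undefined_acronyms("We use a large language model (LLM). The DSL is not defined."))
```

Heuristics like these only flag candidates; each hit still needs to be verified against the paper before it goes into the review.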
### Step 4: Critical Analysis

Evaluate the paper on these dimensions:

| Dimension | Questions to Answer |
|-----------|---------------------|
| **Novelty** | Is this genuinely new? How does it differ from prior work? |
| **Soundness** | Is the methodology rigorous? Are experiments well-designed? |
| **Significance** | Does this advance the field meaningfully? |
| **Clarity** | Is the paper well-written and well-structured? |
| **Reproducibility** | Could someone replicate this work from the paper alone? |
| **Related Work** | Is the positioning against prior work fair and complete? |
| **Venue Alignment** | Does the paper meet expectations of the target venue (scope, depth, format, length, contribution type)? |

### Step 5: Provide Actionable Feedback

Structure feedback as:

- **Strengths** — What the paper does well (be specific, cite sections)
- **Weaknesses** — What could be improved (be constructive, suggest fixes)
- **Questions for Authors** — Things that need clarification
- **Minor Issues** — Typos, formatting, citation issues, broken references
- **Venue-Specific Recommendations** — Highlight alignment issues and potential improvements to meet venue expectations

### Step 6: Top Actions (Start Here)

Write down a list of the top 10 most immediate actions the authors should address.

These should be the ones that bring the best "bang for the buck", i.e. the actions that generate the most benefit relative to the cost of implementing them.
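The prioritization amounts to ranking by a benefit-to-cost ratio. A toy sketch, with invented actions and made-up 1-5 scores purely for illustration:

```python
# Each candidate action: (description, estimated benefit 1-5, estimated effort 1-5).
actions = [
    ("Add missing baseline comparison", 5, 4),
    ("Fix broken Table 3 reference", 4, 1),
    ("Define all acronyms on first use", 3, 1),
]

def top_actions(actions, n=10):
    """Rank actions by benefit/effort ratio, highest first, and keep the top n."""
    ranked = sorted(actions, key=lambda a: a[1] / a[2], reverse=True)
    return [name for name, _, _ in ranked[:n]]

print(top_actions(actions))
```

Cheap high-impact fixes (a broken reference) surface above expensive ones (a new baseline experiment), which is exactly the ordering authors want when time is short.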
## Output Format

Use this template for the review:

```markdown
# Paper Review: [Title]

## Summary
[2-3 paragraph summary]

## Strengths
- S1: ...
- S2: ...

## Weaknesses
- W1: ...
- W2: ...

## Questions for Authors
- Q1: ...

## Minor Issues
- ...

## Venue-Specific Recommendations
- V1: ...
- V2: ...

## Overall Assessment
[1 paragraph verdict: accept/revise/reject with justification]

## Top Actions (Start Here)
- T1: ...
- T2: ...

## Confidence
[Your confidence level in this review: low/medium/high, and why]
```
package/package.json
ADDED
@@ -0,0 +1,22 @@
{
  "name": "@besser/research-paper-review",
  "version": "1.0.0",
  "description": "Review and analyze academic research papers — consistency checks, structured critique, venue-specific feedback, pre-submission review",
  "author": "BESSER-PEARL",
  "contributors": [
    "Armen Sulejmani",
    "Ivan David Alfonso",
    "Jordi Cabot"
  ],
  "keywords": [
    "skill",
    "research",
    "paper-review",
    "academic",
    "peer-review",
    "literature-review",
    "pre-submission",
    "numerical-consistency"
  ],
  "license": "MIT"
}