@wentorai/research-plugins 1.2.2 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (141)
  1. package/README.md +16 -8
  2. package/openclaw.plugin.json +10 -3
  3. package/package.json +2 -5
  4. package/skills/analysis/dataviz/SKILL.md +25 -0
  5. package/skills/analysis/dataviz/chart-image-generator/SKILL.md +1 -1
  6. package/skills/analysis/econometrics/SKILL.md +23 -0
  7. package/skills/analysis/econometrics/robustness-checks/SKILL.md +1 -1
  8. package/skills/analysis/statistics/SKILL.md +21 -0
  9. package/skills/analysis/statistics/data-anomaly-detection/SKILL.md +1 -1
  10. package/skills/analysis/statistics/ml-experiment-tracker/SKILL.md +1 -1
  11. package/skills/analysis/statistics/{senior-data-scientist-guide → modeling-strategy-guide}/SKILL.md +5 -5
  12. package/skills/analysis/wrangling/SKILL.md +21 -0
  13. package/skills/analysis/wrangling/csv-data-analyzer/SKILL.md +1 -1
  14. package/skills/analysis/wrangling/data-cog-guide/SKILL.md +1 -1
  15. package/skills/domains/ai-ml/SKILL.md +37 -0
  16. package/skills/domains/biomedical/SKILL.md +28 -0
  17. package/skills/domains/biomedical/genomas-guide/SKILL.md +1 -1
  18. package/skills/domains/biomedical/med-researcher-guide/SKILL.md +1 -1
  19. package/skills/domains/biomedical/medgeclaw-guide/SKILL.md +1 -1
  20. package/skills/domains/business/SKILL.md +17 -0
  21. package/skills/domains/business/architecture-design-guide/SKILL.md +1 -1
  22. package/skills/domains/chemistry/SKILL.md +19 -0
  23. package/skills/domains/chemistry/computational-chemistry-guide/SKILL.md +1 -1
  24. package/skills/domains/cs/SKILL.md +21 -0
  25. package/skills/domains/ecology/SKILL.md +16 -0
  26. package/skills/domains/economics/SKILL.md +20 -0
  27. package/skills/domains/economics/post-labor-economics/SKILL.md +1 -1
  28. package/skills/domains/economics/pricing-psychology-guide/SKILL.md +1 -1
  29. package/skills/domains/education/SKILL.md +19 -0
  30. package/skills/domains/education/academic-study-methods/SKILL.md +1 -1
  31. package/skills/domains/education/edumcp-guide/SKILL.md +1 -1
  32. package/skills/domains/finance/SKILL.md +19 -0
  33. package/skills/domains/finance/akshare-finance-data/SKILL.md +1 -1
  34. package/skills/domains/finance/options-analytics-agent-guide/SKILL.md +1 -1
  35. package/skills/domains/finance/stata-accounting-research/SKILL.md +1 -1
  36. package/skills/domains/geoscience/SKILL.md +17 -0
  37. package/skills/domains/humanities/SKILL.md +16 -0
  38. package/skills/domains/humanities/history-research-guide/SKILL.md +1 -1
  39. package/skills/domains/humanities/political-history-guide/SKILL.md +1 -1
  40. package/skills/domains/law/SKILL.md +19 -0
  41. package/skills/domains/math/SKILL.md +17 -0
  42. package/skills/domains/pharma/SKILL.md +17 -0
  43. package/skills/domains/physics/SKILL.md +16 -0
  44. package/skills/domains/social-science/SKILL.md +17 -0
  45. package/skills/domains/social-science/sociology-research-methods/SKILL.md +1 -1
  46. package/skills/literature/discovery/SKILL.md +20 -0
  47. package/skills/literature/discovery/paper-recommendation-guide/SKILL.md +1 -1
  48. package/skills/literature/discovery/semantic-paper-radar/SKILL.md +1 -1
  49. package/skills/literature/fulltext/SKILL.md +26 -0
  50. package/skills/literature/metadata/SKILL.md +35 -0
  51. package/skills/literature/metadata/doi-content-negotiation/SKILL.md +4 -0
  52. package/skills/literature/metadata/doi-resolution-guide/SKILL.md +4 -0
  53. package/skills/literature/metadata/orcid-api/SKILL.md +4 -0
  54. package/skills/literature/metadata/orcid-integration-guide/SKILL.md +4 -0
  55. package/skills/literature/search/SKILL.md +43 -0
  56. package/skills/literature/search/paper-search-mcp-guide/SKILL.md +1 -1
  57. package/skills/research/automation/SKILL.md +21 -0
  58. package/skills/research/deep-research/SKILL.md +24 -0
  59. package/skills/research/deep-research/auto-deep-research-guide/SKILL.md +1 -1
  60. package/skills/research/deep-research/in-depth-research-guide/SKILL.md +1 -1
  61. package/skills/research/funding/SKILL.md +20 -0
  62. package/skills/research/methodology/SKILL.md +24 -0
  63. package/skills/research/paper-review/SKILL.md +19 -0
  64. package/skills/research/paper-review/paper-critique-framework/SKILL.md +1 -1
  65. package/skills/tools/code-exec/SKILL.md +18 -0
  66. package/skills/tools/diagram/SKILL.md +20 -0
  67. package/skills/tools/document/SKILL.md +21 -0
  68. package/skills/tools/knowledge-graph/SKILL.md +21 -0
  69. package/skills/tools/ocr-translate/SKILL.md +18 -0
  70. package/skills/tools/ocr-translate/handwriting-recognition-guide/SKILL.md +2 -0
  71. package/skills/tools/ocr-translate/latex-ocr-guide/SKILL.md +2 -0
  72. package/skills/tools/scraping/SKILL.md +17 -0
  73. package/skills/writing/citation/SKILL.md +33 -0
  74. package/skills/writing/citation/zotfile-attachment-guide/SKILL.md +2 -0
  75. package/skills/writing/composition/SKILL.md +22 -0
  76. package/skills/writing/composition/research-paper-writer/SKILL.md +1 -1
  77. package/skills/writing/composition/scientific-writing-wrapper/SKILL.md +1 -1
  78. package/skills/writing/latex/SKILL.md +22 -0
  79. package/skills/writing/latex/academic-writing-latex/SKILL.md +1 -1
  80. package/skills/writing/latex/latex-drawing-guide/SKILL.md +1 -1
  81. package/skills/writing/polish/SKILL.md +20 -0
  82. package/skills/writing/polish/chinese-text-humanizer/SKILL.md +1 -1
  83. package/skills/writing/templates/SKILL.md +22 -0
  84. package/skills/writing/templates/beamer-presentation-guide/SKILL.md +1 -1
  85. package/skills/writing/templates/scientific-article-pdf/SKILL.md +1 -1
  86. package/skills/analysis/dataviz/citation-map-guide/SKILL.md +0 -184
  87. package/skills/analysis/dataviz/data-visualization-principles/SKILL.md +0 -171
  88. package/skills/analysis/econometrics/empirical-paper-analysis/SKILL.md +0 -192
  89. package/skills/analysis/econometrics/panel-data-regression-workflow/SKILL.md +0 -267
  90. package/skills/analysis/econometrics/stata-regression/SKILL.md +0 -117
  91. package/skills/analysis/statistics/general-statistics-guide/SKILL.md +0 -226
  92. package/skills/analysis/statistics/infiagent-benchmark-guide/SKILL.md +0 -106
  93. package/skills/analysis/statistics/pywayne-statistics-guide/SKILL.md +0 -192
  94. package/skills/analysis/statistics/quantitative-methods-guide/SKILL.md +0 -193
  95. package/skills/analysis/wrangling/claude-data-analysis-guide/SKILL.md +0 -100
  96. package/skills/analysis/wrangling/open-data-scientist-guide/SKILL.md +0 -197
  97. package/skills/domains/ai-ml/annotated-dl-papers-guide/SKILL.md +0 -159
  98. package/skills/domains/humanities/digital-humanities-methods/SKILL.md +0 -232
  99. package/skills/domains/law/legal-research-methods/SKILL.md +0 -190
  100. package/skills/domains/social-science/sociology-research-guide/SKILL.md +0 -238
  101. package/skills/literature/discovery/arxiv-paper-monitoring/SKILL.md +0 -233
  102. package/skills/literature/discovery/paper-tracking-guide/SKILL.md +0 -211
  103. package/skills/literature/fulltext/zotero-scihub-guide/SKILL.md +0 -168
  104. package/skills/literature/search/arxiv-osiris/SKILL.md +0 -199
  105. package/skills/literature/search/deepgit-search-guide/SKILL.md +0 -147
  106. package/skills/literature/search/multi-database-literature-search/SKILL.md +0 -198
  107. package/skills/literature/search/papers-chat-guide/SKILL.md +0 -194
  108. package/skills/literature/search/pasa-paper-search-guide/SKILL.md +0 -138
  109. package/skills/literature/search/scientify-literature-survey/SKILL.md +0 -203
  110. package/skills/research/automation/ai-scientist-guide/SKILL.md +0 -228
  111. package/skills/research/automation/coexist-ai-guide/SKILL.md +0 -149
  112. package/skills/research/automation/foam-agent-guide/SKILL.md +0 -203
  113. package/skills/research/automation/research-paper-orchestrator/SKILL.md +0 -254
  114. package/skills/research/deep-research/academic-deep-research/SKILL.md +0 -190
  115. package/skills/research/deep-research/cognitive-kernel-guide/SKILL.md +0 -200
  116. package/skills/research/deep-research/corvus-research-guide/SKILL.md +0 -132
  117. package/skills/research/deep-research/deep-research-pro/SKILL.md +0 -213
  118. package/skills/research/deep-research/deep-research-work/SKILL.md +0 -204
  119. package/skills/research/deep-research/research-cog/SKILL.md +0 -153
  120. package/skills/research/methodology/academic-mentor-guide/SKILL.md +0 -169
  121. package/skills/research/methodology/deep-innovator-guide/SKILL.md +0 -242
  122. package/skills/research/methodology/research-pipeline-units-guide/SKILL.md +0 -169
  123. package/skills/research/paper-review/paper-compare-guide/SKILL.md +0 -238
  124. package/skills/research/paper-review/paper-digest-guide/SKILL.md +0 -240
  125. package/skills/research/paper-review/paper-research-assistant/SKILL.md +0 -231
  126. package/skills/research/paper-review/research-quality-filter/SKILL.md +0 -261
  127. package/skills/tools/code-exec/contextplus-mcp-guide/SKILL.md +0 -110
  128. package/skills/tools/diagram/clawphd-guide/SKILL.md +0 -149
  129. package/skills/tools/diagram/scientific-graphical-abstract/SKILL.md +0 -201
  130. package/skills/tools/document/md2pdf-xelatex/SKILL.md +0 -212
  131. package/skills/tools/document/openpaper-guide/SKILL.md +0 -232
  132. package/skills/tools/document/weknora-guide/SKILL.md +0 -216
  133. package/skills/tools/knowledge-graph/mimir-memory-guide/SKILL.md +0 -135
  134. package/skills/tools/knowledge-graph/open-webui-tools-guide/SKILL.md +0 -156
  135. package/skills/tools/ocr-translate/formula-recognition-guide/SKILL.md +0 -367
  136. package/skills/tools/ocr-translate/math-equation-renderer/SKILL.md +0 -198
  137. package/skills/tools/scraping/api-data-collection-guide/SKILL.md +0 -301
  138. package/skills/writing/citation/academic-citation-manager-guide/SKILL.md +0 -182
  139. package/skills/writing/composition/opendraft-thesis-guide/SKILL.md +0 -200
  140. package/skills/writing/composition/paper-debugger-guide/SKILL.md +0 -143
  141. package/skills/writing/composition/paperforge-guide/SKILL.md +0 -205
@@ -1,231 +0,0 @@
- ---
- name: paper-research-assistant
- description: "Read papers, generate structured reports, find code and datasets"
- metadata:
-   openclaw:
-     emoji: "📄"
-     category: "research"
-     subcategory: "paper-review"
-     keywords: ["paper reading", "research report", "code extraction", "dataset discovery", "paper analysis", "structured summary"]
-     source: "https://github.com/AcademicSkills/paper-research-assistant"
- ---
-
- # Paper Research Assistant
-
- A skill for systematically reading academic papers and generating structured analysis reports. Goes beyond simple summarization by extracting methodology details, identifying associated code repositories and datasets, evaluating reproducibility, and generating actionable summaries suitable for lab meetings, journal clubs, or literature review inclusion.
-
- ## Overview
-
- Reading academic papers efficiently is a core skill for researchers, yet many lack a systematic approach. The result is either superficial skimming that misses critical details or time-consuming deep reads of papers that turn out to be marginally relevant. This skill provides a structured reading protocol that adapts its depth to the paper's relevance, extracts standardized metadata, and produces reports that can be shared with collaborators or filed for future reference.
-
- A distinguishing feature is the automated search for associated resources: code repositories, datasets, pre-trained models, and supplementary materials that are often scattered across GitHub, institutional pages, and data repositories. These resources are essential for reproducibility and building upon published work.
-
- ## Structured Reading Protocol
-
- ### Three-Pass Reading Method
-
- ```
- Pass 1: SURVEY (5-10 minutes)
-   Read: Title, abstract, introduction (last paragraph), section headings,
-         figures/tables (captions only), conclusion (first paragraph)
-   Decide: Is this paper relevant enough for a deeper read?
-   Output: One-paragraph relevance assessment
-
- Pass 2: COMPREHENSION (30-60 minutes)
-   Read: Full paper, but skip mathematical derivations and implementation details
-   Focus: What problem? What approach? What results? What limitations?
-   Mark: Key claims, novel contributions, and points of confusion
-   Output: Structured summary (see template below)
-
- Pass 3: CRITICAL ANALYSIS (30-60 minutes, only for highly relevant papers)
-   Read: Every detail including proofs, appendices, supplementary materials
-   Focus: Are the claims justified? Are there hidden assumptions?
-          Could I reproduce this? What would I do differently?
-   Output: Critical evaluation with reproducibility assessment
- ```
-
- ### Structured Summary Template
-
- ```yaml
- paper_summary:
-   title: ""
-   authors: []
-   venue: "" # journal/conference name
-   year: 0
-   doi: ""
-
-   classification:
-     type: "" # empirical, theoretical, methodological, review, position
-     domain: ""
-     subdomain: ""
-
-   core_content:
-     problem: |
-       What specific problem does this paper address?
-       Why is it important?
-     approach: |
-       What method/framework/model is proposed?
-       What is novel about the approach?
-     key_results:
-       - result: ""
-         metric: ""
-         value: ""
-         baseline_comparison: ""
-     limitations:
-       - ""
-     future_work:
-       - ""
-
-   methodology:
-     data:
-       datasets_used: []
-       sample_size: ""
-       data_availability: "" # public, restricted, proprietary
-     method:
-       type: "" # experimental, observational, simulation, etc.
-       tools: [] # software, libraries, frameworks used
-       reproducibility_score: "" # high, medium, low
-     evaluation:
-       metrics: []
-       baselines: []
-       statistical_tests: []
-
-   resources:
-     code_repository: ""
-     datasets: []
-     pretrained_models: []
-     supplementary_url: ""
-     demo_url: ""
-
-   assessment:
-     relevance_to_my_work: "" # high, medium, low
-     quality_rating: "" # 1-5 scale
-     key_takeaway: ""
-     follow_up_actions: []
- ```
-
- ## Code and Dataset Discovery
-
- ### Finding Associated Resources
-
- ```python
- def find_paper_resources(title: str, authors: list, doi: str = None) -> dict:
-     """
-     Search for code, datasets, and other resources associated with a paper.
-
-     Search locations (in priority order):
-     1. Paper itself (check "Code Availability" section)
-     2. Papers With Code (paperswithcode.com)
-     3. GitHub search (title + author name)
-     4. Author institutional pages
-     5. Zenodo / Figshare (for datasets)
-     6. Hugging Face (for models and datasets)
-     """
-     resources = {
-         'code': [],
-         'datasets': [],
-         'models': [],
-         'supplementary': []
-     }
-
-     # Strategy 1: Papers With Code
-     # Search: https://paperswithcode.com/search?q={title}
-     # Returns: code repos, datasets, benchmark results
-
-     # Strategy 2: GitHub search
-     # Query: "{title}" OR "{first_author} {key_method_term}"
-     # Filter: recently updated, has README, has stars
-
-     # Strategy 3: Zenodo/Figshare
-     # Query: DOI or title
-     # Filter: type=dataset
-
-     # Strategy 4: Hugging Face
-     # Query: paper title or model name
-     # Filter: models, datasets
-
-     # Strategy 5: Google search
-     # Query: "{title}" (code OR github OR repository)
-     # Query: "{title}" (dataset OR data OR download)
-
-     return resources
- ```
-
- ### Reproducibility Assessment
-
- | Factor | Score 3 (High) | Score 2 (Medium) | Score 1 (Low) |
- |--------|---------------|-----------------|--------------|
- | Code availability | Public repo with instructions | Code available on request | No code |
- | Data availability | Public dataset with DOI | Available on request | Proprietary |
- | Method description | Sufficient to reimplement | Missing some details | Vague/incomplete |
- | Hyperparameters | All reported | Key ones reported | Not reported |
- | Environment | Docker/requirements.txt | Software versions listed | Not specified |
- | Random seeds | Fixed and reported | Fixed but not reported | Not controlled |
-
- ## Report Generation
-
- ### Lab Meeting Presentation Format
-
- ```markdown
- ## [Paper Title] - [First Author] et al. ([Year])
-
- ### Problem
- [2-3 sentences on the problem and why it matters]
-
- ### Approach
- [3-4 sentences on the method, emphasizing what is novel]
-
- ### Key Results
- - [Result 1 with specific numbers]
- - [Result 2 with specific numbers]
- - [Comparison to best baseline]
-
- ### Strengths
- - [Strength 1]
- - [Strength 2]
-
- ### Weaknesses / Questions
- - [Weakness 1]
- - [Question for discussion]
-
- ### Relevance to Our Work
- [1-2 sentences on how this connects to the lab's research]
-
- ### Resources
- - Code: [URL or "not available"]
- - Data: [URL or "not available"]
- ```
-
- ### Literature Review Entry Format
-
- For inclusion in a systematic literature review, generate:
-
- 1. **Bibliographic entry**: Full citation in the target format (APA, Vancouver, etc.).
- 2. **Data extraction row**: Structured data for the evidence synthesis table.
- 3. **Quality assessment**: Score on the relevant quality assessment tool (CASP, Newcastle-Ottawa, etc.).
- 4. **Synthesis note**: How this paper relates to others in the review.
-
- ## Batch Processing Workflow
-
- When reading multiple papers on the same topic:
-
- 1. **Triage**: Read Pass 1 for all papers. Sort by relevance.
- 2. **Prioritize**: Full read (Pass 2+3) only for high-relevance papers.
- 3. **Cross-reference**: After reading all papers, build a comparison matrix.
- 4. **Synthesize**: Identify points of agreement, disagreement, and gaps.
- 5. **File**: Store all structured summaries for future retrieval.
-
- ## Best Practices
-
- - Always start with Pass 1. Do not commit to a deep read before assessing relevance.
- - Read the figures and tables early. In empirical papers, they often tell the core story.
- - Note your questions and confusions during reading. These often point to genuine gaps or weaknesses.
- - Search for code and datasets before attempting to reproduce results manually.
- - When a paper cites a finding as established, trace back to the original source and verify.
- - Keep a running log of papers read with one-sentence summaries for quick future reference.
-
- ## References
-
- - Keshav, S. (2007). How to Read a Paper. *ACM SIGCOMM Computer Communication Review*, 37(3), 83-84.
- - Pautasso, M. (2013). Ten Simple Rules for Writing a Literature Review. *PLoS Computational Biology*, 9(7).
- - Raff, E. (2019). A Step Toward Quantifying Independently Reproducible Machine Learning Research. *NeurIPS 2019*.
@@ -1,261 +0,0 @@
- ---
- name: research-quality-filter
- description: "Filter and assess research paper quality using structured criteria"
- metadata:
-   openclaw:
-     emoji: "🏷️"
-     category: "research"
-     subcategory: "paper-review"
-     keywords: ["quality assessment", "paper filtering", "evidence grading", "critical appraisal", "study quality", "screening"]
-     source: "https://github.com/AcademicSkills/research-quality-filter"
- ---
-
- # Research Quality Filter
-
- A skill for systematically filtering and assessing the quality of research papers using structured appraisal criteria. Designed for researchers conducting literature reviews, systematic reviews, or evidence syntheses who need to triage large sets of candidate papers and evaluate the methodological rigor of included studies.
-
- ## Overview
-
- When conducting any form of literature review, researchers face two sequential challenges: first, reducing a large set of search results to a manageable set of relevant papers (screening), and second, assessing the methodological quality of those papers to determine how much weight to give their findings (appraisal). Both tasks are time-consuming and prone to inconsistency when performed ad hoc.
-
- This skill provides structured tools for both stages. For screening, it implements a two-pass protocol (title/abstract screening followed by full-text screening) with explicit inclusion/exclusion criteria. For quality appraisal, it provides instrument-specific checklists adapted from established frameworks (CASP, Newcastle-Ottawa, JBI, GRADE) that produce numerical quality scores and standardized assessments.
-
- ## Screening Protocol
-
- ### Two-Pass Screening
-
- ```
- Pass 1: Title and Abstract Screening
-   For each paper, apply inclusion/exclusion criteria:
-
-   INCLUDE if ALL of the following are met:
-   □ Addresses the research question (at least tangentially)
-   □ Published in a peer-reviewed venue (or recognized preprint server)
-   □ Written in an included language (typically English)
-   □ Published within the date range of interest
-   □ Reports original research OR is a systematic review
-
-   EXCLUDE if ANY of the following are met:
-   □ Clearly off-topic (different population, intervention, or outcome)
-   □ Wrong study type (e.g., editorial, letter, commentary if not included)
-   □ Duplicate of another included paper
-   □ Published in a known predatory journal
-   □ Retracted
-
-   Mark as UNCERTAIN if:
-   □ Relevance cannot be determined from title/abstract alone
-
-   Decision: INCLUDE | EXCLUDE | UNCERTAIN → move uncertain to Pass 2
-
- Pass 2: Full-Text Screening
-   Read full text of INCLUDE and UNCERTAIN papers.
-   Apply the same criteria with additional checks:
-   □ Methods are described sufficiently
-   □ Outcome of interest is actually measured/reported
-   □ Sample/population matches inclusion criteria
- ```
-
- ### Screening Tracker
-
- ```python
- import pandas as pd
-
- def create_screening_tracker(papers: list, criteria: dict) -> pd.DataFrame:
-     """
-     Create a structured screening tracker for a set of candidate papers.
-     """
-     tracker = pd.DataFrame(papers)
-
-     # Add screening columns
-     tracker['pass1_decision'] = ''  # include / exclude / uncertain
-     tracker['pass1_reason'] = ''    # reason for exclusion
-     tracker['pass1_screener'] = ''  # who screened
-     tracker['pass2_decision'] = ''  # include / exclude (for pass2 candidates)
-     tracker['pass2_reason'] = ''
-     tracker['quality_score'] = None # assigned after appraisal
-
-     return tracker
-
- def calculate_screening_agreement(screener_a: list, screener_b: list) -> dict:
-     """
-     Calculate inter-rater agreement for dual screening.
-     """
-     from sklearn.metrics import cohen_kappa_score
-     kappa = cohen_kappa_score(screener_a, screener_b)
-     agreement_pct = sum(a == b for a, b in zip(screener_a, screener_b)) / len(screener_a) * 100
-
-     return {
-         'cohens_kappa': round(kappa, 3),
-         'percent_agreement': round(agreement_pct, 1),
-         'interpretation': (
-             'almost perfect' if kappa > 0.81 else
-             'substantial' if kappa > 0.61 else
-             'moderate' if kappa > 0.41 else
-             'fair' if kappa > 0.21 else 'poor'
-         ),
-         'disagreements': sum(a != b for a, b in zip(screener_a, screener_b))
-     }
- ```
-
- ## Quality Appraisal Instruments
-
- ### Selecting the Right Tool
-
- | Study Design | Appraisal Tool | Items |
- |-------------|---------------|-------|
- | Randomized controlled trial | Cochrane Risk of Bias (RoB 2) | 5 domains |
- | Cohort study | Newcastle-Ottawa Scale (NOS) | 8 items |
- | Case-control study | Newcastle-Ottawa Scale (NOS) | 8 items |
- | Cross-sectional | JBI Checklist for Analytical Cross-Sectional | 8 items |
- | Qualitative study | CASP Qualitative Checklist | 10 items |
- | Systematic review | AMSTAR 2 | 16 items |
- | Diagnostic accuracy | QUADAS-2 | 4 domains |
- | Mixed methods | MMAT | 5 criteria per component |
-
- ### Generic Quality Assessment
-
- ```python
- def assess_paper_quality(paper: dict, study_type: str) -> dict:
-     """
-     Apply a structured quality assessment to a research paper.
-     Returns scores on standardized criteria.
-     """
-     criteria = {
-         'research_question': {
-             'description': 'Is the research question clearly stated?',
-             'options': {'yes': 2, 'partially': 1, 'no': 0},
-             'score': None
-         },
-         'study_design': {
-             'description': 'Is the study design appropriate for the question?',
-             'options': {'yes': 2, 'partially': 1, 'no': 0},
-             'score': None
-         },
-         'sampling': {
-             'description': 'Is the sample adequate and representative?',
-             'options': {'yes': 2, 'partially': 1, 'no': 0},
-             'score': None
-         },
-         'measurement': {
-             'description': 'Are outcome measures valid and reliable?',
-             'options': {'yes': 2, 'partially': 1, 'no': 0},
-             'score': None
-         },
-         'analysis': {
-             'description': 'Is the statistical analysis appropriate?',
-             'options': {'yes': 2, 'partially': 1, 'no': 0},
-             'score': None
-         },
-         'confounding': {
-             'description': 'Are potential confounders addressed?',
-             'options': {'yes': 2, 'partially': 1, 'no': 0},
-             'score': None
-         },
-         'results_reporting': {
-             'description': 'Are results clearly and completely reported?',
-             'options': {'yes': 2, 'partially': 1, 'no': 0},
-             'score': None
-         },
-         'limitations': {
-             'description': 'Are limitations honestly discussed?',
-             'options': {'yes': 2, 'partially': 1, 'no': 0},
-             'score': None
-         }
-     }
-
-     # Calculate total score
-     max_score = len(criteria) * 2
-     total = sum(c['score'] for c in criteria.values() if c['score'] is not None)
-     pct = total / max_score * 100
-
-     quality_rating = (
-         'High' if pct >= 75 else
-         'Medium' if pct >= 50 else
-         'Low'
-     )
-
-     return {
-         'criteria': criteria,
-         'total_score': total,
-         'max_score': max_score,
-         'percentage': round(pct, 1),
-         'quality_rating': quality_rating
-     }
- ```
-
- ## Evidence Grading with GRADE
-
- ### GRADE Framework Application
-
- The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) framework rates the overall quality of evidence for each outcome:
-
- | Starting Level | Study Type | Initial Quality |
- |---------------|-----------|-----------------|
- | High | Randomized trials | +4 |
- | Low | Observational studies | +2 |
-
- **Factors that lower quality:**
-
- | Factor | When to Downgrade | Impact |
- |--------|------------------|--------|
- | Risk of bias | Serious methodological limitations | -1 or -2 |
- | Inconsistency | Unexplained heterogeneity across studies | -1 or -2 |
- | Indirectness | Population/intervention/outcome mismatch | -1 or -2 |
- | Imprecision | Wide confidence intervals, small samples | -1 or -2 |
- | Publication bias | Evidence of missing studies | -1 or -2 |
-
- **Factors that raise quality (observational studies only):**
-
- | Factor | When to Upgrade | Impact |
- |--------|----------------|--------|
- | Large effect | OR > 2 or OR < 0.5 consistently | +1 or +2 |
- | Dose-response | Clear gradient observed | +1 |
- | Confounders | Would reduce effect (strengthens finding) | +1 |
-
- ### Final GRADE Ratings
-
- | Rating | Meaning |
- |--------|---------|
- | **High** | Very confident the true effect is close to the estimate |
- | **Moderate** | Moderately confident; true effect likely close to estimate |
- | **Low** | Limited confidence; true effect may differ substantially |
- | **Very Low** | Very little confidence; true effect likely substantially different |
-
- ## PRISMA Flow Diagram
-
- ### Tracking the Filtering Process
-
- ```
- Identification:
-   Records from databases: n = ____
-   Records from other sources: n = ____
-   Total records: n = ____
-
- Screening:
-   Records after duplicates removed: n = ____
-   Records screened (title/abstract): n = ____
-   Records excluded at screening: n = ____ (reasons: ____)
-
- Eligibility:
-   Full-text articles assessed: n = ____
-   Full-text articles excluded: n = ____ (reasons: ____)
-
- Included:
-   Studies in qualitative synthesis: n = ____
-   Studies in quantitative synthesis (meta-analysis): n = ____
- ```
-
- ## Best Practices
-
- - Use dual screening (two independent reviewers) for systematic reviews; calculate inter-rater agreement.
- - Document exclusion reasons for every paper excluded at full-text stage.
- - Apply the same quality assessment tool consistently across all included studies.
- - Do not exclude studies based on quality alone; instead, perform sensitivity analysis with and without low-quality studies.
- - Present the PRISMA flow diagram in every systematic review to show the filtering process transparently.
- - Record screening decisions in a structured tracker, not in email threads or ad hoc notes.
-
- ## References
-
- - Moher, D., et al. (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. *BMJ*, 339, b2535.
- - Schunemann, H. J., et al. (2013). GRADE Handbook. *Cochrane Collaboration*.
- - Wells, G. A., et al. (2000). The Newcastle-Ottawa Scale (NOS) for Assessing the Quality of Nonrandomised Studies. *Ottawa Hospital Research Institute*.
@@ -1,110 +0,0 @@
- ---
- name: contextplus-mcp-guide
- description: "Semantic code search MCP with Tree-sitter AST and RAG"
- metadata:
-   openclaw:
-     emoji: "🌳"
-     category: "tools"
-     subcategory: "code-exec"
-     keywords: ["semantic search", "code search", "Tree-sitter", "AST", "MCP", "RAG"]
-     source: "https://github.com/ForLoopCodes/contextplus"
- ---
-
- # ContextPlus MCP Guide
-
- ## Overview
-
- ContextPlus is an MCP server that provides semantic code search using Tree-sitter AST parsing and RAG. It indexes codebases by extracting functions, classes, and modules as semantic units, embeds them for vector search, and serves results to any MCP-compatible LLM client. Helps AI agents understand large codebases by retrieving the most relevant code context.
-
- ## Installation
-
- ```bash
- npm install -g @contextplus/mcp-server
-
- # Or run directly
- npx @contextplus/mcp-server --workspace ./your-project
- ```
-
- ## MCP Configuration
-
- ```json
- {
-   "mcpServers": {
-     "contextplus": {
-       "command": "npx",
-       "args": ["@contextplus/mcp-server",
-                "--workspace", "./project"],
-       "env": {
-         "EMBEDDING_MODEL": "all-MiniLM-L6-v2",
-         "MAX_RESULTS": "10"
-       }
-     }
-   }
- }
- ```
-
- ## Features
-
- ```markdown
- ### Semantic Understanding
- - **Tree-sitter parsing**: Extract functions, classes, types
- - **AST-aware chunking**: Split code at logical boundaries
- - **Cross-reference**: Track imports, calls, dependencies
- - **Multi-language**: Python, TypeScript, Go, Rust, Java, C++
-
- ### Search Capabilities
- - Natural language code search ("find auth middleware")
- - Symbol search (function/class by name)
- - Dependency graph ("what calls this function")
- - Semantic similarity (find similar implementations)
-
- ### MCP Tools Provided
- - `search_code(query)` — Semantic code search
- - `get_symbol(name)` — Get symbol definition + usages
- - `get_dependencies(file)` — File dependency graph
- - `get_context(file, line)` — Surrounding context for a location
- ```
-
- ## Usage Examples
-
- ```markdown
- ### In Claude Code / LLM Chat
-
- "Find all authentication-related functions"
- → Returns: auth middleware, login handler, token validation
-
- "What functions call the database connection pool?"
- → Returns: dependency graph with callers
-
- "Find code similar to this error handling pattern"
- → Returns: semantically similar try/catch blocks
- ```
-
- ## Indexing Configuration
-
- ```json
- {
-   "indexing": {
-     "include": ["src/**/*.ts", "lib/**/*.py"],
-     "exclude": ["node_modules", "__pycache__", ".git"],
-     "languages": ["typescript", "python"],
-     "chunk_strategy": "ast",
-     "max_chunk_tokens": 500,
-     "rebuild_on_change": true
-   }
- }
- ```
-
- ## Use Cases
-
- 1. **Code understanding**: Navigate unfamiliar codebases
- 2. **Agent context**: Provide relevant code to LLM agents
- 3. **Code review**: Find related patterns and similar code
- 4. **Refactoring**: Discover all usages before changing code
- 5. **Documentation**: Generate docs from code structure
-
- ## References
-
- - [ContextPlus GitHub](https://github.com/ForLoopCodes/contextplus)
- - [Tree-sitter](https://tree-sitter.github.io/tree-sitter/)
- - [MCP Specification](https://modelcontextprotocol.io/)