@wentorai/research-plugins 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +204 -0
- package/curated/analysis/README.md +64 -0
- package/curated/domains/README.md +104 -0
- package/curated/literature/README.md +53 -0
- package/curated/research/README.md +62 -0
- package/curated/tools/README.md +87 -0
- package/curated/writing/README.md +61 -0
- package/index.ts +39 -0
- package/mcp-configs/academic-db/ChatSpatial.json +17 -0
- package/mcp-configs/academic-db/academia-mcp.json +17 -0
- package/mcp-configs/academic-db/academic-paper-explorer.json +17 -0
- package/mcp-configs/academic-db/academic-search-mcp-server.json +17 -0
- package/mcp-configs/academic-db/agentinterviews-mcp.json +17 -0
- package/mcp-configs/academic-db/all-in-mcp.json +17 -0
- package/mcp-configs/academic-db/apple-health-mcp.json +17 -0
- package/mcp-configs/academic-db/arxiv-latex-mcp.json +17 -0
- package/mcp-configs/academic-db/arxiv-mcp-server.json +17 -0
- package/mcp-configs/academic-db/bgpt-mcp.json +17 -0
- package/mcp-configs/academic-db/biomcp.json +17 -0
- package/mcp-configs/academic-db/biothings-mcp.json +17 -0
- package/mcp-configs/academic-db/catalysishub-mcp-server.json +17 -0
- package/mcp-configs/academic-db/clinicaltrialsgov-mcp-server.json +17 -0
- package/mcp-configs/academic-db/deep-research-mcp.json +17 -0
- package/mcp-configs/academic-db/dicom-mcp.json +17 -0
- package/mcp-configs/academic-db/enrichr-mcp-server.json +17 -0
- package/mcp-configs/academic-db/fec-mcp-server.json +17 -0
- package/mcp-configs/academic-db/fhir-mcp-server-themomentum.json +17 -0
- package/mcp-configs/academic-db/fhir-mcp.json +19 -0
- package/mcp-configs/academic-db/gget-mcp.json +17 -0
- package/mcp-configs/academic-db/google-researcher-mcp.json +17 -0
- package/mcp-configs/academic-db/idea-reality-mcp.json +17 -0
- package/mcp-configs/academic-db/legiscan-mcp.json +19 -0
- package/mcp-configs/academic-db/lex.json +17 -0
- package/mcp-configs/ai-platform/Adaptive-Graph-of-Thoughts-MCP-server.json +17 -0
- package/mcp-configs/ai-platform/ai-counsel.json +17 -0
- package/mcp-configs/ai-platform/atlas-mcp-server.json +17 -0
- package/mcp-configs/ai-platform/counsel-mcp.json +17 -0
- package/mcp-configs/ai-platform/cross-llm-mcp.json +17 -0
- package/mcp-configs/ai-platform/gptr-mcp.json +17 -0
- package/mcp-configs/browser/decipher-research-agent.json +17 -0
- package/mcp-configs/browser/deep-research.json +17 -0
- package/mcp-configs/browser/everything-claude-code.json +17 -0
- package/mcp-configs/browser/gpt-researcher.json +17 -0
- package/mcp-configs/browser/heurist-agent-framework.json +17 -0
- package/mcp-configs/data-platform/4everland-hosting-mcp.json +17 -0
- package/mcp-configs/data-platform/context-keeper.json +17 -0
- package/mcp-configs/data-platform/context7.json +19 -0
- package/mcp-configs/data-platform/contextstream-mcp.json +17 -0
- package/mcp-configs/data-platform/email-mcp.json +17 -0
- package/mcp-configs/note-knowledge/ApeRAG.json +17 -0
- package/mcp-configs/note-knowledge/In-Memoria.json +17 -0
- package/mcp-configs/note-knowledge/agent-memory.json +17 -0
- package/mcp-configs/note-knowledge/aimemo.json +17 -0
- package/mcp-configs/note-knowledge/biel-mcp.json +19 -0
- package/mcp-configs/note-knowledge/cognee.json +17 -0
- package/mcp-configs/note-knowledge/context-awesome.json +17 -0
- package/mcp-configs/note-knowledge/context-mcp.json +17 -0
- package/mcp-configs/note-knowledge/conversation-handoff-mcp.json +17 -0
- package/mcp-configs/note-knowledge/cortex.json +17 -0
- package/mcp-configs/note-knowledge/devrag.json +17 -0
- package/mcp-configs/note-knowledge/easy-obsidian-mcp.json +17 -0
- package/mcp-configs/note-knowledge/engram.json +17 -0
- package/mcp-configs/note-knowledge/gnosis-mcp.json +17 -0
- package/mcp-configs/note-knowledge/graphlit-mcp-server.json +19 -0
- package/mcp-configs/reference-mgr/arxiv-cli.json +17 -0
- package/mcp-configs/reference-mgr/arxiv-search-mcp.json +17 -0
- package/mcp-configs/reference-mgr/chiken.json +17 -0
- package/mcp-configs/reference-mgr/claude-scholar.json +17 -0
- package/mcp-configs/reference-mgr/devonthink-mcp.json +17 -0
- package/mcp-configs/registry.json +447 -0
- package/openclaw.plugin.json +21 -0
- package/package.json +61 -0
- package/skills/analysis/dataviz/color-accessibility-guide/SKILL.md +230 -0
- package/skills/analysis/dataviz/geospatial-viz-guide/SKILL.md +218 -0
- package/skills/analysis/dataviz/interactive-viz-guide/SKILL.md +287 -0
- package/skills/analysis/dataviz/network-visualization-guide/SKILL.md +195 -0
- package/skills/analysis/dataviz/publication-figures-guide/SKILL.md +238 -0
- package/skills/analysis/dataviz/python-dataviz-guide/SKILL.md +195 -0
- package/skills/analysis/econometrics/causal-inference-guide/SKILL.md +197 -0
- package/skills/analysis/econometrics/iv-regression-guide/SKILL.md +198 -0
- package/skills/analysis/econometrics/panel-data-guide/SKILL.md +274 -0
- package/skills/analysis/econometrics/robustness-checks/SKILL.md +250 -0
- package/skills/analysis/econometrics/stata-regression/SKILL.md +117 -0
- package/skills/analysis/econometrics/time-series-guide/SKILL.md +235 -0
- package/skills/analysis/statistics/bayesian-statistics-guide/SKILL.md +221 -0
- package/skills/analysis/statistics/hypothesis-testing-guide/SKILL.md +210 -0
- package/skills/analysis/statistics/meta-analysis-guide/SKILL.md +206 -0
- package/skills/analysis/statistics/nonparametric-tests-guide/SKILL.md +221 -0
- package/skills/analysis/statistics/power-analysis-guide/SKILL.md +240 -0
- package/skills/analysis/statistics/sem-guide/SKILL.md +231 -0
- package/skills/analysis/statistics/survival-analysis-guide/SKILL.md +195 -0
- package/skills/analysis/wrangling/missing-data-handling/SKILL.md +224 -0
- package/skills/analysis/wrangling/pandas-data-wrangling/SKILL.md +242 -0
- package/skills/analysis/wrangling/questionnaire-design-guide/SKILL.md +234 -0
- package/skills/analysis/wrangling/text-mining-guide/SKILL.md +225 -0
- package/skills/domains/ai-ml/computer-vision-guide/SKILL.md +213 -0
- package/skills/domains/ai-ml/deep-learning-papers-guide/SKILL.md +200 -0
- package/skills/domains/ai-ml/llm-evaluation-guide/SKILL.md +194 -0
- package/skills/domains/ai-ml/prompt-engineering-research/SKILL.md +233 -0
- package/skills/domains/ai-ml/reinforcement-learning-guide/SKILL.md +254 -0
- package/skills/domains/ai-ml/transformer-architecture-guide/SKILL.md +233 -0
- package/skills/domains/biomedical/clinical-research-guide/SKILL.md +232 -0
- package/skills/domains/biomedical/clinicaltrials-api/SKILL.md +177 -0
- package/skills/domains/biomedical/epidemiology-guide/SKILL.md +200 -0
- package/skills/domains/biomedical/genomics-analysis-guide/SKILL.md +270 -0
- package/skills/domains/business/market-analysis-guide/SKILL.md +112 -0
- package/skills/domains/business/strategic-management-guide/SKILL.md +154 -0
- package/skills/domains/chemistry/computational-chemistry-guide/SKILL.md +266 -0
- package/skills/domains/chemistry/retrosynthesis-guide/SKILL.md +215 -0
- package/skills/domains/cs/algorithms-complexity-guide/SKILL.md +194 -0
- package/skills/domains/cs/dblp-api/SKILL.md +129 -0
- package/skills/domains/cs/software-engineering-research/SKILL.md +218 -0
- package/skills/domains/ecology/biodiversity-data-guide/SKILL.md +296 -0
- package/skills/domains/ecology/conservation-biology-guide/SKILL.md +198 -0
- package/skills/domains/ecology/gbif-api/SKILL.md +158 -0
- package/skills/domains/ecology/inaturalist-api/SKILL.md +173 -0
- package/skills/domains/economics/behavioral-economics-guide/SKILL.md +239 -0
- package/skills/domains/economics/development-economics-guide/SKILL.md +181 -0
- package/skills/domains/economics/fred-api/SKILL.md +189 -0
- package/skills/domains/education/curriculum-design-guide/SKILL.md +144 -0
- package/skills/domains/education/learning-science-guide/SKILL.md +150 -0
- package/skills/domains/finance/financial-data-analysis/SKILL.md +152 -0
- package/skills/domains/finance/quantitative-finance-guide/SKILL.md +151 -0
- package/skills/domains/geoscience/climate-science-guide/SKILL.md +158 -0
- package/skills/domains/geoscience/gis-remote-sensing-guide/SKILL.md +129 -0
- package/skills/domains/humanities/digital-humanities-guide/SKILL.md +181 -0
- package/skills/domains/humanities/philosophy-research-guide/SKILL.md +148 -0
- package/skills/domains/law/courtlistener-api/SKILL.md +213 -0
- package/skills/domains/law/legal-research-guide/SKILL.md +250 -0
- package/skills/domains/math/linear-algebra-applications/SKILL.md +227 -0
- package/skills/domains/math/numerical-methods-guide/SKILL.md +236 -0
- package/skills/domains/math/oeis-api/SKILL.md +158 -0
- package/skills/domains/pharma/clinical-pharmacology-guide/SKILL.md +165 -0
- package/skills/domains/pharma/drug-development-guide/SKILL.md +177 -0
- package/skills/domains/physics/computational-physics-guide/SKILL.md +300 -0
- package/skills/domains/physics/nasa-ads-api/SKILL.md +150 -0
- package/skills/domains/physics/quantum-computing-guide/SKILL.md +234 -0
- package/skills/domains/social-science/social-research-methods/SKILL.md +194 -0
- package/skills/domains/social-science/survey-research-guide/SKILL.md +182 -0
- package/skills/literature/discovery/citation-alert-guide/SKILL.md +154 -0
- package/skills/literature/discovery/conference-proceedings-guide/SKILL.md +142 -0
- package/skills/literature/discovery/literature-mapping-guide/SKILL.md +175 -0
- package/skills/literature/discovery/paper-tracking-guide/SKILL.md +211 -0
- package/skills/literature/discovery/rss-paper-feeds/SKILL.md +214 -0
- package/skills/literature/discovery/semantic-scholar-recs-guide/SKILL.md +164 -0
- package/skills/literature/fulltext/doaj-api/SKILL.md +120 -0
- package/skills/literature/fulltext/interlibrary-loan-guide/SKILL.md +163 -0
- package/skills/literature/fulltext/open-access-guide/SKILL.md +183 -0
- package/skills/literature/fulltext/pmc-oai-api/SKILL.md +184 -0
- package/skills/literature/fulltext/preprint-servers-guide/SKILL.md +128 -0
- package/skills/literature/fulltext/repository-harvesting-guide/SKILL.md +207 -0
- package/skills/literature/fulltext/unpaywall-api/SKILL.md +113 -0
- package/skills/literature/metadata/altmetrics-guide/SKILL.md +132 -0
- package/skills/literature/metadata/citation-network-guide/SKILL.md +236 -0
- package/skills/literature/metadata/crossref-api/SKILL.md +133 -0
- package/skills/literature/metadata/datacite-api/SKILL.md +126 -0
- package/skills/literature/metadata/doi-resolution-guide/SKILL.md +168 -0
- package/skills/literature/metadata/h-index-guide/SKILL.md +183 -0
- package/skills/literature/metadata/journal-metrics-guide/SKILL.md +188 -0
- package/skills/literature/metadata/opencitations-api/SKILL.md +128 -0
- package/skills/literature/metadata/orcid-api/SKILL.md +136 -0
- package/skills/literature/metadata/orcid-integration-guide/SKILL.md +178 -0
- package/skills/literature/search/arxiv-api/SKILL.md +95 -0
- package/skills/literature/search/biorxiv-api/SKILL.md +123 -0
- package/skills/literature/search/boolean-search-guide/SKILL.md +199 -0
- package/skills/literature/search/citation-chaining-guide/SKILL.md +148 -0
- package/skills/literature/search/database-comparison-guide/SKILL.md +100 -0
- package/skills/literature/search/europe-pmc-api/SKILL.md +120 -0
- package/skills/literature/search/google-scholar-guide/SKILL.md +182 -0
- package/skills/literature/search/mesh-terms-guide/SKILL.md +164 -0
- package/skills/literature/search/openalex-api/SKILL.md +134 -0
- package/skills/literature/search/pubmed-api/SKILL.md +130 -0
- package/skills/literature/search/scientify-literature-survey/SKILL.md +203 -0
- package/skills/literature/search/semantic-scholar-api/SKILL.md +134 -0
- package/skills/literature/search/systematic-search-strategy/SKILL.md +214 -0
- package/skills/research/automation/ai-scientist-guide/SKILL.md +228 -0
- package/skills/research/automation/data-collection-automation/SKILL.md +248 -0
- package/skills/research/automation/research-workflow-automation/SKILL.md +266 -0
- package/skills/research/deep-research/meta-synthesis-guide/SKILL.md +174 -0
- package/skills/research/deep-research/research-cog/SKILL.md +153 -0
- package/skills/research/deep-research/scoping-review-guide/SKILL.md +217 -0
- package/skills/research/deep-research/systematic-review-guide/SKILL.md +250 -0
- package/skills/research/funding/figshare-api/SKILL.md +163 -0
- package/skills/research/funding/grant-writing-guide/SKILL.md +233 -0
- package/skills/research/funding/nsf-grant-guide/SKILL.md +206 -0
- package/skills/research/funding/open-science-guide/SKILL.md +255 -0
- package/skills/research/funding/zenodo-api/SKILL.md +174 -0
- package/skills/research/methodology/action-research-guide/SKILL.md +201 -0
- package/skills/research/methodology/experimental-design-guide/SKILL.md +236 -0
- package/skills/research/methodology/grad-school-guide/SKILL.md +182 -0
- package/skills/research/methodology/grounded-theory-guide/SKILL.md +171 -0
- package/skills/research/methodology/mixed-methods-guide/SKILL.md +208 -0
- package/skills/research/methodology/qualitative-research-guide/SKILL.md +234 -0
- package/skills/research/methodology/scientify-idea-generation/SKILL.md +222 -0
- package/skills/research/paper-review/paper-reading-assistant/SKILL.md +266 -0
- package/skills/research/paper-review/peer-review-guide/SKILL.md +227 -0
- package/skills/research/paper-review/rebuttal-writing-guide/SKILL.md +185 -0
- package/skills/research/paper-review/scientify-write-review-paper/SKILL.md +209 -0
- package/skills/tools/code-exec/jupyter-notebook-guide/SKILL.md +178 -0
- package/skills/tools/code-exec/python-reproducibility-guide/SKILL.md +341 -0
- package/skills/tools/code-exec/r-reproducibility-guide/SKILL.md +236 -0
- package/skills/tools/code-exec/sandbox-execution-guide/SKILL.md +221 -0
- package/skills/tools/diagram/mermaid-diagram-guide/SKILL.md +269 -0
- package/skills/tools/diagram/plantuml-guide/SKILL.md +397 -0
- package/skills/tools/diagram/scientific-illustration-guide/SKILL.md +225 -0
- package/skills/tools/document/anystyle-api/SKILL.md +199 -0
- package/skills/tools/document/grobid-pdf-parsing/SKILL.md +294 -0
- package/skills/tools/document/markdown-academic-guide/SKILL.md +217 -0
- package/skills/tools/document/pdf-extraction-guide/SKILL.md +321 -0
- package/skills/tools/knowledge-graph/knowledge-graph-construction/SKILL.md +306 -0
- package/skills/tools/knowledge-graph/ontology-design-guide/SKILL.md +214 -0
- package/skills/tools/knowledge-graph/rag-methodology-guide/SKILL.md +325 -0
- package/skills/tools/ocr-translate/formula-recognition-guide/SKILL.md +367 -0
- package/skills/tools/ocr-translate/handwriting-recognition-guide/SKILL.md +211 -0
- package/skills/tools/ocr-translate/latex-ocr-guide/SKILL.md +204 -0
- package/skills/tools/ocr-translate/multilingual-research-guide/SKILL.md +234 -0
- package/skills/tools/scraping/academic-web-scraping/SKILL.md +326 -0
- package/skills/tools/scraping/api-data-collection-guide/SKILL.md +301 -0
- package/skills/tools/scraping/web-scraping-ethics-guide/SKILL.md +250 -0
- package/skills/writing/citation/bibtex-management-guide/SKILL.md +246 -0
- package/skills/writing/citation/citation-style-guide/SKILL.md +248 -0
- package/skills/writing/citation/reference-manager-comparison/SKILL.md +208 -0
- package/skills/writing/citation/zotero-api/SKILL.md +188 -0
- package/skills/writing/composition/abstract-writing-guide/SKILL.md +188 -0
- package/skills/writing/composition/discussion-writing-guide/SKILL.md +194 -0
- package/skills/writing/composition/introduction-writing-guide/SKILL.md +194 -0
- package/skills/writing/composition/literature-review-writing/SKILL.md +196 -0
- package/skills/writing/composition/methods-section-guide/SKILL.md +185 -0
- package/skills/writing/composition/response-to-reviewers/SKILL.md +215 -0
- package/skills/writing/composition/scientific-writing-guide/SKILL.md +152 -0
- package/skills/writing/latex/bibliography-management-guide/SKILL.md +206 -0
- package/skills/writing/latex/latex-drawing-guide/SKILL.md +234 -0
- package/skills/writing/latex/latex-ecosystem-guide/SKILL.md +240 -0
- package/skills/writing/latex/math-typesetting-guide/SKILL.md +231 -0
- package/skills/writing/latex/overleaf-collaboration-guide/SKILL.md +211 -0
- package/skills/writing/latex/tikz-diagrams-guide/SKILL.md +211 -0
- package/skills/writing/polish/academic-translation-guide/SKILL.md +175 -0
- package/skills/writing/polish/academic-writing-refiner/SKILL.md +143 -0
- package/skills/writing/polish/ai-writing-humanizer/SKILL.md +178 -0
- package/skills/writing/polish/grammar-checker-guide/SKILL.md +184 -0
- package/skills/writing/polish/plagiarism-detection-guide/SKILL.md +167 -0
- package/skills/writing/templates/beamer-presentation-guide/SKILL.md +263 -0
- package/skills/writing/templates/conference-paper-template/SKILL.md +219 -0
- package/skills/writing/templates/thesis-template-guide/SKILL.md +200 -0
- package/skills/writing/templates/thesis-writing-guide/SKILL.md +220 -0
- package/src/tools/arxiv.ts +131 -0
- package/src/tools/crossref.ts +112 -0
- package/src/tools/openalex.ts +174 -0
- package/src/tools/pubmed.ts +166 -0
- package/src/tools/semantic-scholar.ts +108 -0
- package/src/tools/unpaywall.ts +58 -0
@@ -0,0 +1,248 @@
---
name: data-collection-automation
description: "Automate survey deployment, data collection, and pipeline management"
metadata:
  openclaw:
    emoji: "robot"
    category: "research"
    subcategory: "automation"
    keywords: ["data collection", "survey automation", "pipeline", "Qualtrics API", "research automation", "ETL"]
    source: "wentor-research-plugins"
---

# Data Collection Automation Guide

A skill for automating research data collection, survey deployment, and data pipeline management. Covers survey platform APIs, automated data retrieval, quality checks, ETL pipelines, and scheduling for longitudinal studies.

## Survey Platform APIs

### Qualtrics API

```python
import os
import json
import urllib.request
import time


def export_qualtrics_responses(survey_id: str,
                               file_format: str = "csv") -> str:
    """
    Export survey responses from Qualtrics via API.

    Args:
        survey_id: The Qualtrics survey ID (SV_...)
        file_format: Export format (csv, json, spss)
    """
    api_token = os.environ["QUALTRICS_API_TOKEN"]
    data_center = os.environ["QUALTRICS_DATACENTER"]
    base_url = f"https://{data_center}.qualtrics.com/API/v3"

    headers = {
        "X-API-TOKEN": api_token,
        "Content-Type": "application/json"
    }

    # Step 1: Start export
    export_data = json.dumps({
        "format": file_format,
        "compress": False
    }).encode("utf-8")

    req = urllib.request.Request(
        f"{base_url}/surveys/{survey_id}/export-responses",
        data=export_data,
        headers=headers
    )
    response = json.loads(urllib.request.urlopen(req).read())
    progress_id = response["result"]["progressId"]

    # Step 2: Poll for completion
    status = "inProgress"
    while status == "inProgress":
        time.sleep(2)
        req = urllib.request.Request(
            f"{base_url}/surveys/{survey_id}/export-responses/{progress_id}",
            headers=headers
        )
        check = json.loads(urllib.request.urlopen(req).read())
        status = check["result"]["status"]

    # The export can also end in "failed"; don't assume a fileId exists
    if status != "complete":
        raise RuntimeError(f"Qualtrics export ended with status '{status}'")
    file_id = check["result"]["fileId"]

    # Step 3: Download file
    req = urllib.request.Request(
        f"{base_url}/surveys/{survey_id}/export-responses/{file_id}/file",
        headers=headers
    )
    file_data = urllib.request.urlopen(req).read()

    output_path = f"responses_{survey_id}.{file_format}"
    with open(output_path, "wb") as f:
        f.write(file_data)

    return output_path
```
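
Survey APIs enforce request quotas, so polling loops and batch exports should be throttled client-side. A minimal sketch (the 0.1-second interval is an illustrative assumption, not a Qualtrics limit):

```python
import time
from functools import wraps


def throttle(min_interval: float):
    """Decorator: enforce a minimum delay between successive calls."""
    def decorator(func):
        last_call = [0.0]  # mutable cell so the wrapper can update it

        @wraps(func)
        def wrapper(*args, **kwargs):
            wait = min_interval - (time.monotonic() - last_call[0])
            if wait > 0:
                time.sleep(wait)
            last_call[0] = time.monotonic()
            return func(*args, **kwargs)
        return wrapper
    return decorator


@throttle(0.1)
def fetch(i):
    # Placeholder for an API request
    return i


start = time.monotonic()
results = [fetch(i) for i in range(3)]
elapsed = time.monotonic() - start
print(results)  # [0, 1, 2]
```

The same decorator can wrap any of the export helpers in this guide to keep a scheduled pipeline inside its API quota.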

### REDCap API

```python
import urllib.parse  # needed for form-encoding the request body


def export_redcap_records(api_url: str, fields: list[str] | None = None) -> list:
    """
    Export records from a REDCap project.

    Args:
        api_url: REDCap API endpoint URL
        fields: List of field names to export (None = all fields)
    """
    api_token = os.environ["REDCAP_API_TOKEN"]

    data = {
        "token": api_token,
        "content": "record",
        "format": "json",
        "type": "flat"
    }

    if fields:
        data["fields"] = ",".join(fields)

    encoded = urllib.parse.urlencode(data).encode("utf-8")
    req = urllib.request.Request(api_url, data=encoded)
    response = urllib.request.urlopen(req)

    return json.loads(response.read())
```

## Automated Data Quality Checks

### Validation Pipeline

```python
import pandas as pd
from datetime import datetime


def validate_survey_data(df: pd.DataFrame,
                         rules: dict) -> dict:
    """
    Run automated data quality checks on collected data.

    Args:
        df: DataFrame of survey responses
        rules: Dict of column -> validation rule pairs
    """
    issues = []

    # Check for duplicates
    dupes = df.duplicated(subset=["respondent_id"]).sum()
    if dupes > 0:
        issues.append(f"Found {dupes} duplicate respondent IDs")

    # Check completion rates
    completion = df.notna().mean()
    low_completion = completion[completion < 0.5]
    for col in low_completion.index:
        issues.append(f"Column '{col}' has {low_completion[col]:.0%} completion")

    # Check value ranges
    for col, rule in rules.items():
        if col not in df.columns:
            continue
        if "min" in rule:
            violations = (df[col] < rule["min"]).sum()
            if violations > 0:
                issues.append(f"{violations} values below minimum in '{col}'")
        if "max" in rule:
            violations = (df[col] > rule["max"]).sum()
            if violations > 0:
                issues.append(f"{violations} values above maximum in '{col}'")

    # Check for speeding (unusually fast completion)
    if "duration_seconds" in df.columns:
        median_time = df["duration_seconds"].median()
        speeders = (df["duration_seconds"] < median_time * 0.3).sum()
        if speeders > 0:
            issues.append(f"{speeders} respondents completed in <30% of median time")

    return {
        "n_records": len(df),
        "n_issues": len(issues),
        "issues": issues,
        "timestamp": datetime.now().isoformat()
    }
```
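
The `rules` argument is a plain dict keyed by column name, each value an optional `{"min": ..., "max": ...}` pair. A minimal self-contained sketch of the same range check (the column names and bounds are illustrative):

```python
import pandas as pd

# Hypothetical responses; column names are illustrative
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "age": [25, 17, 40, 130],
})

# Expected rules shape: column name -> {"min": ..., "max": ...}
rules = {"age": {"min": 18, "max": 120}}

# The same range check the validation pipeline applies per column
below = int((df["age"] < rules["age"]["min"]).sum())  # 1 (age 17)
above = int((df["age"] > rules["age"]["max"]).sum())  # 1 (age 130)
print(below, above)
```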

## ETL Pipeline for Research Data

### Scheduled Data Retrieval

```python
def research_etl_pipeline(sources: list[dict],
                          output_dir: str) -> dict:
    """
    Extract, transform, and load research data from multiple sources.

    Args:
        sources: List of data source configurations
        output_dir: Directory to save processed data
    """
    results = {}

    for source in sources:
        name = source["name"]

        # Extract
        if source["type"] == "qualtrics":
            raw_path = export_qualtrics_responses(source["survey_id"])
            df = pd.read_csv(raw_path)
        elif source["type"] == "redcap":
            records = export_redcap_records(source["api_url"])
            df = pd.DataFrame(records)
        elif source["type"] == "csv_url":
            df = pd.read_csv(source["url"])
        else:
            continue

        # Transform
        df = df.dropna(how="all")
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

        # Load
        timestamp = datetime.now().strftime("%Y%m%d")
        output_path = f"{output_dir}/{name}_{timestamp}.csv"
        df.to_csv(output_path, index=False)

        results[name] = {
            "records": len(df),
            "columns": len(df.columns),
            "output": output_path
        }

    return results
```
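
Each source entry needs a `name`, a `type` the pipeline dispatches on, and the type-specific connection key. A sketch of the expected shape (all IDs and URLs are placeholders):

```python
# Placeholder IDs/URLs; only the key structure matters
sources = [
    {"name": "baseline_survey", "type": "qualtrics",
     "survey_id": "SV_XXXXXXXX"},
    {"name": "clinic_records", "type": "redcap",
     "api_url": "https://redcap.example.edu/api/"},
    {"name": "public_data", "type": "csv_url",
     "url": "https://example.org/data.csv"},
]

# Every entry carries the keys the pipeline dispatches on
ok = all("name" in s and "type" in s for s in sources)
print(ok)  # True
```

Unknown `type` values are silently skipped by the pipeline, so a typo in a source entry shows up as a missing key in the returned `results` dict.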

## Scheduling and Monitoring

### Cron-Based Scheduling

```bash
# Run data collection pipeline daily at 6 AM
# crontab -e
0 6 * * * cd /path/to/project && python collect_data.py >> logs/collection.log 2>&1
```

### Monitoring Checklist

For longitudinal studies, automate monitoring of:

- Response rates per wave (alert if below threshold)
- Data quality metrics (completion, speeding, straight-lining)
- API quota usage (stay within rate limits)
- Storage usage and backup status
- Participant dropout patterns
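
Straight-lining (a respondent giving the same answer to every item in a question matrix) can be flagged directly from the response frame. A minimal sketch with illustrative item columns:

```python
import pandas as pd

# Hypothetical Likert items q1..q3, values 1-5
df = pd.DataFrame({
    "q1": [3, 5, 2],
    "q2": [3, 1, 2],
    "q3": [3, 4, 2],
})

# A row straight-lines when every item answer is identical
item_cols = ["q1", "q2", "q3"]
straight = df[item_cols].nunique(axis=1) == 1
n_straightliners = int(straight.sum())  # rows 0 and 2
print(n_straightliners)  # 2
```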

## Ethical Considerations

Always ensure automated data collection complies with your IRB/ethics board approval. Store API tokens in environment variables, never in code; encrypt data at rest; and log all data access for audit trails. Respect rate limits on external APIs, and run automated checks on consent status before processing any participant data.
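
The consent gate can be a hard filter at the top of every processing step, so non-consenting records never reach an analysis. A sketch, assuming a boolean `consent` column (the column name is illustrative):

```python
import pandas as pd

# Hypothetical responses with a consent flag
df = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "consent": [True, False, True],
    "score": [10, 12, 9],
})

# Drop non-consenting rows before any analysis touches the data
consented = df[df["consent"]].copy()
print(len(consented))  # 2
```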

@@ -0,0 +1,266 @@
---
name: research-workflow-automation
description: "Automate repetitive research tasks with pipelines, schedulers, and scripting"
metadata:
  openclaw:
    emoji: "gear"
    category: "research"
    subcategory: "automation"
    keywords: ["workflow management", "pipeline scheduler", "research automation", "scientific workflow", "task automation"]
    source: "wentor"
---

# Research Workflow Automation

A skill for automating repetitive research tasks using workflow managers, pipeline tools, and scripting. Covers data pipeline design, experiment tracking, automated reporting, and reproducible research workflows.

## Workflow Management Tools

### Tool Comparison

| Tool | Language | Best For | Complexity | License |
|------|----------|----------|------------|---------|
| Snakemake | Python | Bioinformatics, data pipelines | Medium | MIT |
| Nextflow | Groovy/DSL | Genomics, HPC | Medium | Apache 2.0 |
| Prefect | Python | Data engineering, ML | Medium | Apache 2.0 |
| Airflow | Python | Scheduled ETL pipelines | High | Apache 2.0 |
| Make | Makefile | Simple file-based pipelines | Low | GPL |
| DVC | YAML/CLI | ML experiment tracking | Low | Apache 2.0 |

### Snakemake: Scientific Workflow Example

```python
# Snakefile for a research data pipeline

# Configuration
configfile: "config.yaml"

# Define the final outputs
rule all:
    input:
        "results/figures/main_figure.pdf",
        "results/tables/summary_table.csv"

# Step 1: Download and preprocess data
rule download_data:
    output:
        "data/raw/{dataset}.csv"
    params:
        url = lambda wildcards: config["datasets"][wildcards.dataset]["url"]
    shell:
        "curl -L {params.url} -o {output}"

rule clean_data:
    input:
        "data/raw/{dataset}.csv"
    output:
        "data/cleaned/{dataset}.parquet"
    script:
        "scripts/clean_data.py"

# Step 2: Run analysis
rule statistical_analysis:
    input:
        expand("data/cleaned/{dataset}.parquet",
               dataset=config["datasets"].keys())
    output:
        "results/analysis/statistics.json",
        "results/analysis/model_fits.pkl"
    threads: 4
    resources:
        mem_mb = 8000
    script:
        "scripts/run_analysis.py"

# Step 3: Generate figures
rule create_figures:
    input:
        "results/analysis/statistics.json"
    output:
        "results/figures/main_figure.pdf"
    script:
        "scripts/create_figures.py"

# Step 4: Generate summary table
rule summary_table:
    input:
        "results/analysis/statistics.json"
    output:
        "results/tables/summary_table.csv"
    script:
        "scripts/create_tables.py"
```

```bash
# Execute the full pipeline
snakemake --cores 8 --use-conda

# Visualize the workflow DAG
snakemake --dag | dot -Tpdf > workflow.pdf

# Dry run to see what would be executed
snakemake -n
```
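
The Snakefile above reads dataset URLs from `config.yaml`; a minimal sketch of the nesting it assumes (dataset names and URLs are placeholders):

```yaml
# config.yaml -- placeholder names/URLs; only the nesting matters
datasets:
  survey_wave1:
    url: "https://example.org/wave1.csv"
  survey_wave2:
    url: "https://example.org/wave2.csv"
```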
|
|
105
|
+
|
|
106
|
+
## Make-Based Pipelines
|
|
107
|
+
|
|
108
|
+
### Simple Makefile for Research
|
|
109
|
+
|
|
110
|
+
```makefile
|
|
111
|
+
# Makefile for a research project
|
|
112
|
+
.PHONY: all clean data analysis figures paper
|
|
113
|
+
|
|
114
|
+
# Default target
|
|
115
|
+
all: paper
|
|
116
|
+
|
|
117
|
+
# Data acquisition and cleaning
|
|
118
|
+
data/cleaned/dataset.parquet: data/raw/dataset.csv scripts/clean.py
|
|
119
|
+
python scripts/clean.py --input $< --output $@
|
|
120
|
+
|
|
121
|
+
# Analysis
|
|
122
|
+
results/statistics.json: data/cleaned/dataset.parquet scripts/analyze.py
|
|
123
|
+
python scripts/analyze.py --input $< --output $@
|
|
124
|
+
|
|
125
|
+
# Figures
|
|
126
|
+
results/figures/%.pdf: results/statistics.json scripts/plot_%.py
|
|
127
|
+
python scripts/plot_$*.py --input $< --output $@
|
|
128
|
+
|
|
129
|
+
# Compile paper
|
|
130
|
+
paper: results/figures/main.pdf results/figures/supplement.pdf
|
|
131
|
+
cd paper && latexmk -pdf main.tex
|
|
132
|
+
|
|
133
|
+
# Clean all generated files
|
|
134
|
+
clean:
|
|
135
|
+
rm -rf data/cleaned/ results/ paper/*.pdf paper/*.aux paper/*.log
|
|
136
|
+
```
|
|
137
|
+
|
|
138
|
+
## Experiment Tracking

### MLflow for Research Experiments

```python
import mlflow

def track_experiment(experiment_name: str, params: dict,
                     metrics: dict, artifacts: list[str] | None = None):
    """
    Track a research experiment with MLflow.

    Args:
        experiment_name: Name of the experiment series
        params: Hyperparameters or configuration
        metrics: Results metrics
        artifacts: Paths to output files to log
    """
    mlflow.set_experiment(experiment_name)

    with mlflow.start_run():
        # Log parameters
        for key, value in params.items():
            mlflow.log_param(key, value)

        # Log metrics
        for key, value in metrics.items():
            mlflow.log_metric(key, value)

        # Log artifacts (figures, data files, etc.)
        if artifacts:
            for artifact_path in artifacts:
                mlflow.log_artifact(artifact_path)

        # Log the full configuration as JSON
        mlflow.log_dict(params, "config.json")

        run_id = mlflow.active_run().info.run_id
        print(f"Experiment logged: {run_id}")
        return run_id

# Example: track a statistical analysis
track_experiment(
    experiment_name="treatment_effect_study",
    params={
        'model': 'linear_regression',
        'covariates': 'age,sex,baseline_score',
        'alpha': 0.05,
        'data_version': 'v2.3'
    },
    metrics={
        'r_squared': 0.42,
        'treatment_effect': 0.35,
        'p_value': 0.003,
        'n_subjects': 245
    },
    artifacts=['results/figures/main.pdf']
)
```

## Automated Reporting

### Generate Reports from Analysis Results

```python
from datetime import datetime

from jinja2 import Template

def generate_report(results: dict, output_path: str):
    """
    Auto-generate a research report from analysis results.
    """
    report_template = Template("""
# Analysis Report
Generated: {{ timestamp }}

## Summary Statistics
- Sample size: {{ results.n }}
- Mean outcome: {{ "%.2f"|format(results.mean) }}
- Standard deviation: {{ "%.2f"|format(results.std) }}

## Main Results
- Treatment effect: {{ "%.3f"|format(results.effect) }}
  (95% CI: {{ "%.3f"|format(results.ci_lower) }} to {{ "%.3f"|format(results.ci_upper) }})
- p-value: {{ "%.4f"|format(results.p_value) }}
- Effect size (Cohen's d): {{ "%.2f"|format(results.cohens_d) }}

## Interpretation
{% if results.p_value < 0.05 %}
The treatment effect is statistically significant at the 5% level.
{% else %}
The treatment effect is not statistically significant at the 5% level.
{% endif %}
""")

    report = report_template.render(
        results=results,
        timestamp=datetime.now().strftime('%Y-%m-%d %H:%M')
    )

    with open(output_path, 'w') as f:
        f.write(report)

    return output_path
```

## Scheduling and Cron Jobs

### Automated Data Collection

```bash
# Crontab entry: run daily at 6 AM
0 6 * * * cd /home/researcher/project && python scripts/daily_data_fetch.py >> logs/fetch.log 2>&1

# Weekly analysis update (every Monday at 9 AM)
0 9 * * 1 cd /home/researcher/project && snakemake --cores 4 >> logs/pipeline.log 2>&1
```

## Best Practices

1. **Version everything**: Code, data, configurations, and environments
2. **Idempotent pipelines**: Running the same pipeline twice produces the same output
3. **Fail fast**: Validate inputs early; do not process bad data silently
4. **Log everything**: Record timestamps, parameters, and random seeds
5. **Separate configuration from code**: Use YAML/JSON config files, not hardcoded values
6. **Test with small data first**: Use a 1% sample to verify the pipeline before full runs
7. **Document the workflow**: A README explaining how to run the full pipeline from scratch
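Practices 3 through 5 can be combined in a few lines. The sketch below is a minimal illustration, assuming a hypothetical `config.json` with `seed` and `sample_fraction` keys; the function and key names are placeholders, not part of any pipeline above:

```python
import json
import random
import time

def load_config(path: str) -> dict:
    """Load run configuration from a JSON file and fail fast on missing keys."""
    with open(path) as f:
        config = json.load(f)
    for key in ("seed", "sample_fraction"):
        if key not in config:
            raise KeyError(f"config missing required key: {key}")
    return config

def start_run(config: dict) -> str:
    """Seed the RNG and return a log line recording timestamp and parameters."""
    random.seed(config["seed"])
    return f"[{time.strftime('%Y-%m-%d %H:%M:%S')}] starting run with {config}"
```

Keeping the configuration in a file means the same script can be re-run on the 1% sample or the full data by editing the config, not the code.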
---
name: meta-synthesis-guide
description: "Conduct qualitative meta-synthesis using established evidence synthesis methods"
metadata:
  openclaw:
    emoji: "chains"
    category: "research"
    subcategory: "deep-research"
    keywords: ["meta-synthesis", "qualitative evidence synthesis", "meta-ethnography", "thematic synthesis", "systematic review"]
    source: "wentor-research-plugins"
---

# Meta-Synthesis Guide

A skill for conducting qualitative meta-synthesis -- the systematic integration of findings across multiple qualitative studies. Covers meta-ethnography (Noblit & Hare), thematic synthesis (Thomas & Harden), framework synthesis, and quality appraisal of qualitative studies.

## What Is Qualitative Meta-Synthesis?

### Overview

```
Meta-synthesis is to qualitative research what meta-analysis
is to quantitative research -- it systematically combines
findings from multiple studies to produce higher-order
interpretations.

Key differences from meta-analysis:
- Interpretive, not statistical aggregation
- Aims to generate new understanding, not average effect sizes
- Synthesizes themes, concepts, and metaphors across studies
- Product is a new interpretation, not a pooled statistic
```

### When to Use Meta-Synthesis

```
Appropriate when:
- Multiple qualitative studies exist on a topic
- You want to build theory or deepen understanding
- Individual studies have limited scope but collectively cover a phenomenon
- Policy or practice needs an integrated evidence base from qualitative work

Not appropriate when:
- Studies are too heterogeneous in topic to meaningfully compare
- Fewer than 3 qualitative studies exist
- The goal is to measure effect sizes (use meta-analysis instead)
```

## Meta-Ethnography (Noblit & Hare)

### Seven-Step Process

```python
def meta_ethnography_steps() -> dict:
    """
    The seven steps of meta-ethnography (Noblit & Hare, 1988).
    """
    return {
        "step_1_getting_started": {
            "description": "Identify the research question and intellectual interest",
            "output": "Clear synthesis question"
        },
        "step_2_deciding_what_is_relevant": {
            "description": "Systematic search and selection of qualitative studies",
            "output": "Final set of included studies",
            "note": "Use PRISMA flow diagram to document selection"
        },
        "step_3_reading_the_studies": {
            "description": (
                "Read and re-read included studies carefully. "
                "Identify key metaphors, themes, and concepts in each."
            ),
            "output": "List of first-order (participant quotes) and "
                      "second-order (author interpretations) constructs"
        },
        "step_4_determining_how_studies_are_related": {
            "description": (
                "Create a grid mapping constructs across studies. "
                "Determine if studies are reciprocal (about similar things), "
                "refutational (contradictory), or form a line of argument."
            ),
            "output": "Construct comparison table"
        },
        "step_5_translating_studies": {
            "description": (
                "Translate the concepts of one study into the terms of another. "
                "This is the core analytical step -- finding common meaning "
                "expressed in different language."
            ),
            "output": "Translated constructs across all studies"
        },
        "step_6_synthesizing_translations": {
            "description": (
                "Develop third-order constructs -- new interpretations "
                "that go beyond what any single study found."
            ),
            "output": "Third-order constructs (the synthesis)"
        },
        "step_7_expressing_the_synthesis": {
            "description": "Write up the synthesis in a form accessible to the audience",
            "output": "Published meta-synthesis paper"
        }
    }
```
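
Step 4's construct comparison table can be sketched as a presence/absence grid. A minimal sketch; the study names and constructs below are hypothetical placeholders, not from any particular synthesis:

```python
def build_construct_grid(study_constructs: dict[str, list[str]]) -> dict[str, dict[str, bool]]:
    """Map each study to a row recording which second-order constructs it contains."""
    all_constructs = sorted({c for cs in study_constructs.values() for c in cs})
    return {
        study: {c: c in constructs for c in all_constructs}
        for study, constructs in study_constructs.items()
    }

# Hypothetical constructs extracted during step 3
grid = build_construct_grid({
    "Study A": ["stigma", "peer support"],
    "Study B": ["stigma", "identity loss"],
})
```

Gaps in the grid (a construct absent from a study) are exactly where translation work in step 5 begins.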

### Types of Synthesis

```
Reciprocal translation:
  Studies are about similar things. Translate them into each other.
  "Study A calls it 'navigating uncertainty'; Study B calls it
  'managing ambiguity'; Study C calls it 'living with not knowing'.
  The overarching construct is 'Tolerating the Unknown.'"

Refutational synthesis:
  Studies contradict each other. Explore why.
  "Study A found empowerment through peer support; Study B found
  peer support increased anxiety. This refutation may be explained
  by the stage of illness at which support was received."

Line of argument synthesis:
  Studies address different aspects that together form a whole.
  "Study A covers diagnosis, B covers treatment, C covers recovery.
  Together they reveal a trajectory of 'Reconstructing Identity.'"
```
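
A reciprocal translation can be recorded as a simple mapping from each study's label to the shared construct. The sketch below uses the labels from the example above; the function name is a hypothetical convenience, not an established tool:

```python
def reciprocal_translation(translations: dict[str, str]) -> dict[str, list[str]]:
    """Group study-specific labels under the shared (translated) construct."""
    synthesis: dict[str, list[str]] = {}
    for label, construct in translations.items():
        synthesis.setdefault(construct, []).append(label)
    return synthesis

# Labels from three studies, translated into one overarching construct
result = reciprocal_translation({
    "navigating uncertainty": "Tolerating the Unknown",
    "managing ambiguity": "Tolerating the Unknown",
    "living with not knowing": "Tolerating the Unknown",
})
```

Keeping this mapping explicit preserves the evidence trail from each study's original language to the synthesized construct.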

## Thematic Synthesis (Thomas & Harden)

### Three-Stage Approach

```
Stage 1: Free coding of findings
- Treat the findings sections of included studies as data
- Code them line by line, as in primary qualitative analysis

Stage 2: Organizing codes into descriptive themes
- Group codes into descriptive themes
- These are "close to" the original studies

Stage 3: Generating analytical themes
- Go beyond the content of the original studies
- Generate new interpretive constructs
- Answer the synthesis research question
```
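
Stages 1 and 2 can be tracked with simple data structures that also preserve which studies support each theme. The codes, themes, and study names below are hypothetical; stage 3 remains an interpretive step done by the analyst, not computed:

```python
from collections import defaultdict

# Stage 1 output: (study, code) pairs from line-by-line coding of findings sections
coded_findings = [
    ("Study A", "fear of relapse"),
    ("Study B", "guilt about family"),
    ("Study B", "hiding the diagnosis"),
    ("Study C", "fear of relapse"),
]

# Stage 2: the analyst's grouping of codes into descriptive themes
code_to_theme = {
    "fear of relapse": "emotional burden",
    "guilt about family": "emotional burden",
    "hiding the diagnosis": "managing disclosure",
}

# Roll up: which studies contribute to each descriptive theme
descriptive: dict[str, set[str]] = defaultdict(set)
for study, code in coded_findings:
    descriptive[code_to_theme[code]].add(study)
```

The per-theme study sets make it easy to see which themes rest on a single study and which recur across the corpus.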

## Quality Appraisal

### Assessing Qualitative Studies

```
Tools for appraising qualitative study quality:

CASP Qualitative Checklist (10 items):
- Was there a clear statement of aims?
- Is a qualitative methodology appropriate?
- Was the research design appropriate?
- Was the recruitment strategy appropriate?
- Was the data collected in a way that addressed the research issue?
- Was the relationship between researcher and participants considered?
- Were ethical issues considered?
- Was data analysis sufficiently rigorous?
- Was there a clear statement of findings?
- How valuable is the research?

JBI Checklist for Qualitative Research (10 criteria)

Decision: Include all studies or exclude low-quality studies?
- Sensitivity analysis: Run the synthesis with and without
  lower-quality studies to see if conclusions change.
```
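
The sensitivity analysis can be sketched as a set comparison. A minimal illustration; the study names, themes, and quality ratings below are hypothetical:

```python
def sensitivity_check(study_themes: dict[str, set[str]],
                      quality: dict[str, str]) -> dict[str, set[str]]:
    """Compare pooled themes with and without lower-quality studies."""
    all_themes = set().union(*study_themes.values())
    high_only = set().union(*(
        themes for study, themes in study_themes.items()
        if quality[study] == "high"
    ))
    return {
        "all_studies": all_themes,
        "high_quality_only": high_only,
        # Themes supported only by lower-quality studies deserve scrutiny
        "themes_at_risk": all_themes - high_only,
    }

report = sensitivity_check(
    {"Study A": {"stigma", "hope"},
     "Study B": {"stigma", "financial strain"},
     "Study C": {"isolation"}},
    {"Study A": "high", "Study B": "low", "Study C": "high"},
)
```

If `themes_at_risk` is empty, the conclusions do not hinge on the lower-quality studies; otherwise, report those themes with appropriate caution.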

## Reporting Standards

Use the ENTREQ (Enhancing Transparency in Reporting the Synthesis of Qualitative Research) statement. Report:

- the synthesis methodology used
- the search strategy and selection criteria
- quality appraisal results
- a table of included studies with their key constructs
- the synthesis process with clear evidence trails
- how third-order constructs were derived from the primary studies