@synsci/cli-darwin-x64 1.1.97 → 1.1.99

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (1549)
  1. package/bin/synsc +0 -0
  2. package/package.json +1 -1
  3. package/bin/skills/accelerate/SKILL.md +0 -332
  4. package/bin/skills/accelerate/references/custom-plugins.md +0 -453
  5. package/bin/skills/accelerate/references/megatron-integration.md +0 -489
  6. package/bin/skills/accelerate/references/performance.md +0 -525
  7. package/bin/skills/adaptyv/SKILL.md +0 -114
  8. package/bin/skills/adaptyv/reference/api_reference.md +0 -308
  9. package/bin/skills/adaptyv/reference/examples.md +0 -913
  10. package/bin/skills/adaptyv/reference/experiments.md +0 -360
  11. package/bin/skills/adaptyv/reference/protein_optimization.md +0 -637
  12. package/bin/skills/aeon/SKILL.md +0 -374
  13. package/bin/skills/aeon/references/anomaly_detection.md +0 -154
  14. package/bin/skills/aeon/references/classification.md +0 -144
  15. package/bin/skills/aeon/references/clustering.md +0 -123
  16. package/bin/skills/aeon/references/datasets_benchmarking.md +0 -387
  17. package/bin/skills/aeon/references/distances.md +0 -256
  18. package/bin/skills/aeon/references/forecasting.md +0 -140
  19. package/bin/skills/aeon/references/networks.md +0 -289
  20. package/bin/skills/aeon/references/regression.md +0 -118
  21. package/bin/skills/aeon/references/segmentation.md +0 -163
  22. package/bin/skills/aeon/references/similarity_search.md +0 -187
  23. package/bin/skills/aeon/references/transformations.md +0 -246
  24. package/bin/skills/alphafold-database/SKILL.md +0 -513
  25. package/bin/skills/alphafold-database/references/api_reference.md +0 -423
  26. package/bin/skills/anndata/SKILL.md +0 -400
  27. package/bin/skills/anndata/references/best_practices.md +0 -525
  28. package/bin/skills/anndata/references/concatenation.md +0 -396
  29. package/bin/skills/anndata/references/data_structure.md +0 -314
  30. package/bin/skills/anndata/references/io_operations.md +0 -404
  31. package/bin/skills/anndata/references/manipulation.md +0 -516
  32. package/bin/skills/arboreto/SKILL.md +0 -243
  33. package/bin/skills/arboreto/references/algorithms.md +0 -138
  34. package/bin/skills/arboreto/references/basic_inference.md +0 -151
  35. package/bin/skills/arboreto/references/distributed_computing.md +0 -242
  36. package/bin/skills/arboreto/scripts/basic_grn_inference.py +0 -97
  37. package/bin/skills/astropy/SKILL.md +0 -331
  38. package/bin/skills/astropy/references/coordinates.md +0 -273
  39. package/bin/skills/astropy/references/cosmology.md +0 -307
  40. package/bin/skills/astropy/references/fits.md +0 -396
  41. package/bin/skills/astropy/references/tables.md +0 -489
  42. package/bin/skills/astropy/references/time.md +0 -404
  43. package/bin/skills/astropy/references/units.md +0 -178
  44. package/bin/skills/astropy/references/wcs_and_other_modules.md +0 -373
  45. package/bin/skills/audiocraft/SKILL.md +0 -564
  46. package/bin/skills/audiocraft/references/advanced-usage.md +0 -666
  47. package/bin/skills/audiocraft/references/troubleshooting.md +0 -504
  48. package/bin/skills/autogpt/SKILL.md +0 -403
  49. package/bin/skills/autogpt/references/advanced-usage.md +0 -535
  50. package/bin/skills/autogpt/references/troubleshooting.md +0 -420
  51. package/bin/skills/awq/SKILL.md +0 -310
  52. package/bin/skills/awq/references/advanced-usage.md +0 -324
  53. package/bin/skills/awq/references/troubleshooting.md +0 -344
  54. package/bin/skills/axolotl/SKILL.md +0 -158
  55. package/bin/skills/axolotl/references/api.md +0 -5548
  56. package/bin/skills/axolotl/references/dataset-formats.md +0 -1029
  57. package/bin/skills/axolotl/references/index.md +0 -15
  58. package/bin/skills/axolotl/references/other.md +0 -3563
  59. package/bin/skills/benchling-integration/SKILL.md +0 -480
  60. package/bin/skills/benchling-integration/references/api_endpoints.md +0 -883
  61. package/bin/skills/benchling-integration/references/authentication.md +0 -379
  62. package/bin/skills/benchling-integration/references/sdk_reference.md +0 -774
  63. package/bin/skills/bigcode-evaluation-harness/SKILL.md +0 -405
  64. package/bin/skills/bigcode-evaluation-harness/references/benchmarks.md +0 -393
  65. package/bin/skills/bigcode-evaluation-harness/references/custom-tasks.md +0 -424
  66. package/bin/skills/bigcode-evaluation-harness/references/issues.md +0 -394
  67. package/bin/skills/biopython/SKILL.md +0 -443
  68. package/bin/skills/biopython/references/advanced.md +0 -577
  69. package/bin/skills/biopython/references/alignment.md +0 -362
  70. package/bin/skills/biopython/references/blast.md +0 -455
  71. package/bin/skills/biopython/references/databases.md +0 -484
  72. package/bin/skills/biopython/references/phylogenetics.md +0 -566
  73. package/bin/skills/biopython/references/sequence_io.md +0 -285
  74. package/bin/skills/biopython/references/structure.md +0 -564
  75. package/bin/skills/biorxiv-database/SKILL.md +0 -483
  76. package/bin/skills/biorxiv-database/references/api_reference.md +0 -280
  77. package/bin/skills/biorxiv-database/scripts/biorxiv_search.py +0 -445
  78. package/bin/skills/bioservices/SKILL.md +0 -361
  79. package/bin/skills/bioservices/references/identifier_mapping.md +0 -685
  80. package/bin/skills/bioservices/references/services_reference.md +0 -636
  81. package/bin/skills/bioservices/references/workflow_patterns.md +0 -811
  82. package/bin/skills/bioservices/scripts/batch_id_converter.py +0 -347
  83. package/bin/skills/bioservices/scripts/compound_cross_reference.py +0 -378
  84. package/bin/skills/bioservices/scripts/pathway_analysis.py +0 -309
  85. package/bin/skills/bioservices/scripts/protein_analysis_workflow.py +0 -408
  86. package/bin/skills/bitsandbytes/SKILL.md +0 -411
  87. package/bin/skills/bitsandbytes/references/memory-optimization.md +0 -521
  88. package/bin/skills/bitsandbytes/references/qlora-training.md +0 -521
  89. package/bin/skills/bitsandbytes/references/quantization-formats.md +0 -447
  90. package/bin/skills/blip-2/SKILL.md +0 -564
  91. package/bin/skills/blip-2/references/advanced-usage.md +0 -680
  92. package/bin/skills/blip-2/references/troubleshooting.md +0 -526
  93. package/bin/skills/brenda-database/SKILL.md +0 -719
  94. package/bin/skills/brenda-database/references/api_reference.md +0 -537
  95. package/bin/skills/brenda-database/scripts/brenda_queries.py +0 -844
  96. package/bin/skills/brenda-database/scripts/brenda_visualization.py +0 -772
  97. package/bin/skills/brenda-database/scripts/enzyme_pathway_builder.py +0 -1053
  98. package/bin/skills/cellxgene-census/SKILL.md +0 -511
  99. package/bin/skills/cellxgene-census/references/census_schema.md +0 -182
  100. package/bin/skills/cellxgene-census/references/common_patterns.md +0 -351
  101. package/bin/skills/chembl-database/SKILL.md +0 -389
  102. package/bin/skills/chembl-database/references/api_reference.md +0 -272
  103. package/bin/skills/chembl-database/scripts/example_queries.py +0 -278
  104. package/bin/skills/chroma/SKILL.md +0 -406
  105. package/bin/skills/chroma/references/integration.md +0 -38
  106. package/bin/skills/cirq/SKILL.md +0 -346
  107. package/bin/skills/cirq/references/building.md +0 -307
  108. package/bin/skills/cirq/references/experiments.md +0 -572
  109. package/bin/skills/cirq/references/hardware.md +0 -515
  110. package/bin/skills/cirq/references/noise.md +0 -515
  111. package/bin/skills/cirq/references/simulation.md +0 -350
  112. package/bin/skills/cirq/references/transformation.md +0 -416
  113. package/bin/skills/citation-management/SKILL.md +0 -1109
  114. package/bin/skills/citation-management/assets/bibtex_template.bib +0 -264
  115. package/bin/skills/citation-management/assets/citation_checklist.md +0 -386
  116. package/bin/skills/citation-management/references/bibtex_formatting.md +0 -908
  117. package/bin/skills/citation-management/references/citation_validation.md +0 -794
  118. package/bin/skills/citation-management/references/google_scholar_search.md +0 -725
  119. package/bin/skills/citation-management/references/metadata_extraction.md +0 -870
  120. package/bin/skills/citation-management/references/pubmed_search.md +0 -839
  121. package/bin/skills/citation-management/scripts/doi_to_bibtex.py +0 -182
  122. package/bin/skills/citation-management/scripts/extract_metadata.py +0 -570
  123. package/bin/skills/citation-management/scripts/format_bibtex.py +0 -349
  124. package/bin/skills/citation-management/scripts/search_google_scholar.py +0 -251
  125. package/bin/skills/citation-management/scripts/search_pubmed.py +0 -348
  126. package/bin/skills/citation-management/scripts/validate_citations.py +0 -494
  127. package/bin/skills/clinical-decision-support/README.md +0 -129
  128. package/bin/skills/clinical-decision-support/SKILL.md +0 -506
  129. package/bin/skills/clinical-decision-support/assets/biomarker_report_template.tex +0 -380
  130. package/bin/skills/clinical-decision-support/assets/clinical_pathway_template.tex +0 -222
  131. package/bin/skills/clinical-decision-support/assets/cohort_analysis_template.tex +0 -359
  132. package/bin/skills/clinical-decision-support/assets/color_schemes.tex +0 -149
  133. package/bin/skills/clinical-decision-support/assets/example_gbm_cohort.md +0 -208
  134. package/bin/skills/clinical-decision-support/assets/recommendation_strength_guide.md +0 -328
  135. package/bin/skills/clinical-decision-support/assets/treatment_recommendation_template.tex +0 -529
  136. package/bin/skills/clinical-decision-support/references/biomarker_classification.md +0 -719
  137. package/bin/skills/clinical-decision-support/references/clinical_decision_algorithms.md +0 -604
  138. package/bin/skills/clinical-decision-support/references/evidence_synthesis.md +0 -840
  139. package/bin/skills/clinical-decision-support/references/outcome_analysis.md +0 -640
  140. package/bin/skills/clinical-decision-support/references/patient_cohort_analysis.md +0 -427
  141. package/bin/skills/clinical-decision-support/references/treatment_recommendations.md +0 -521
  142. package/bin/skills/clinical-decision-support/scripts/biomarker_classifier.py +0 -383
  143. package/bin/skills/clinical-decision-support/scripts/build_decision_tree.py +0 -417
  144. package/bin/skills/clinical-decision-support/scripts/create_cohort_tables.py +0 -509
  145. package/bin/skills/clinical-decision-support/scripts/generate_survival_analysis.py +0 -441
  146. package/bin/skills/clinical-decision-support/scripts/validate_cds_document.py +0 -326
  147. package/bin/skills/clinical-reports/IMPLEMENTATION_SUMMARY.md +0 -641
  148. package/bin/skills/clinical-reports/README.md +0 -236
  149. package/bin/skills/clinical-reports/SKILL.md +0 -1127
  150. package/bin/skills/clinical-reports/assets/case_report_template.md +0 -352
  151. package/bin/skills/clinical-reports/assets/clinical_trial_csr_template.md +0 -353
  152. package/bin/skills/clinical-reports/assets/clinical_trial_sae_template.md +0 -359
  153. package/bin/skills/clinical-reports/assets/consult_note_template.md +0 -305
  154. package/bin/skills/clinical-reports/assets/discharge_summary_template.md +0 -453
  155. package/bin/skills/clinical-reports/assets/hipaa_compliance_checklist.md +0 -395
  156. package/bin/skills/clinical-reports/assets/history_physical_template.md +0 -305
  157. package/bin/skills/clinical-reports/assets/lab_report_template.md +0 -309
  158. package/bin/skills/clinical-reports/assets/pathology_report_template.md +0 -249
  159. package/bin/skills/clinical-reports/assets/quality_checklist.md +0 -338
  160. package/bin/skills/clinical-reports/assets/radiology_report_template.md +0 -318
  161. package/bin/skills/clinical-reports/assets/soap_note_template.md +0 -253
  162. package/bin/skills/clinical-reports/references/case_report_guidelines.md +0 -570
  163. package/bin/skills/clinical-reports/references/clinical_trial_reporting.md +0 -693
  164. package/bin/skills/clinical-reports/references/data_presentation.md +0 -530
  165. package/bin/skills/clinical-reports/references/diagnostic_reports_standards.md +0 -629
  166. package/bin/skills/clinical-reports/references/medical_terminology.md +0 -588
  167. package/bin/skills/clinical-reports/references/patient_documentation.md +0 -744
  168. package/bin/skills/clinical-reports/references/peer_review_standards.md +0 -585
  169. package/bin/skills/clinical-reports/references/regulatory_compliance.md +0 -577
  170. package/bin/skills/clinical-reports/scripts/check_deidentification.py +0 -332
  171. package/bin/skills/clinical-reports/scripts/compliance_checker.py +0 -78
  172. package/bin/skills/clinical-reports/scripts/extract_clinical_data.py +0 -97
  173. package/bin/skills/clinical-reports/scripts/format_adverse_events.py +0 -97
  174. package/bin/skills/clinical-reports/scripts/generate_report_template.py +0 -149
  175. package/bin/skills/clinical-reports/scripts/terminology_validator.py +0 -126
  176. package/bin/skills/clinical-reports/scripts/validate_case_report.py +0 -323
  177. package/bin/skills/clinical-reports/scripts/validate_trial_report.py +0 -88
  178. package/bin/skills/clinicaltrials-database/SKILL.md +0 -507
  179. package/bin/skills/clinicaltrials-database/references/api_reference.md +0 -358
  180. package/bin/skills/clinicaltrials-database/scripts/query_clinicaltrials.py +0 -215
  181. package/bin/skills/clinpgx-database/SKILL.md +0 -638
  182. package/bin/skills/clinpgx-database/references/api_reference.md +0 -757
  183. package/bin/skills/clinpgx-database/scripts/query_clinpgx.py +0 -518
  184. package/bin/skills/clinvar-database/SKILL.md +0 -362
  185. package/bin/skills/clinvar-database/references/api_reference.md +0 -227
  186. package/bin/skills/clinvar-database/references/clinical_significance.md +0 -218
  187. package/bin/skills/clinvar-database/references/data_formats.md +0 -358
  188. package/bin/skills/clip/SKILL.md +0 -253
  189. package/bin/skills/clip/references/applications.md +0 -207
  190. package/bin/skills/cobrapy/SKILL.md +0 -463
  191. package/bin/skills/cobrapy/references/api_quick_reference.md +0 -655
  192. package/bin/skills/cobrapy/references/workflows.md +0 -593
  193. package/bin/skills/colab-finetuning/SKILL.md +0 -153
  194. package/bin/skills/colab-finetuning/references/bridge-setup.md +0 -68
  195. package/bin/skills/colab-finetuning/references/gpu-tiers.md +0 -54
  196. package/bin/skills/colab-finetuning/references/troubleshooting.md +0 -79
  197. package/bin/skills/constitutional-ai/SKILL.md +0 -290
  198. package/bin/skills/cosmic-database/SKILL.md +0 -336
  199. package/bin/skills/cosmic-database/references/cosmic_data_reference.md +0 -220
  200. package/bin/skills/cosmic-database/scripts/download_cosmic.py +0 -231
  201. package/bin/skills/crewai/SKILL.md +0 -498
  202. package/bin/skills/crewai/references/flows.md +0 -438
  203. package/bin/skills/crewai/references/tools.md +0 -429
  204. package/bin/skills/crewai/references/troubleshooting.md +0 -480
  205. package/bin/skills/dask/SKILL.md +0 -456
  206. package/bin/skills/dask/references/arrays.md +0 -497
  207. package/bin/skills/dask/references/bags.md +0 -468
  208. package/bin/skills/dask/references/best-practices.md +0 -277
  209. package/bin/skills/dask/references/dataframes.md +0 -368
  210. package/bin/skills/dask/references/futures.md +0 -541
  211. package/bin/skills/dask/references/schedulers.md +0 -504
  212. package/bin/skills/datacommons-client/SKILL.md +0 -255
  213. package/bin/skills/datacommons-client/references/getting_started.md +0 -417
  214. package/bin/skills/datacommons-client/references/node.md +0 -250
  215. package/bin/skills/datacommons-client/references/observation.md +0 -185
  216. package/bin/skills/datacommons-client/references/resolve.md +0 -246
  217. package/bin/skills/datamol/SKILL.md +0 -706
  218. package/bin/skills/datamol/references/conformers_module.md +0 -131
  219. package/bin/skills/datamol/references/core_api.md +0 -130
  220. package/bin/skills/datamol/references/descriptors_viz.md +0 -195
  221. package/bin/skills/datamol/references/fragments_scaffolds.md +0 -174
  222. package/bin/skills/datamol/references/io_module.md +0 -109
  223. package/bin/skills/datamol/references/reactions_data.md +0 -218
  224. package/bin/skills/deepchem/SKILL.md +0 -597
  225. package/bin/skills/deepchem/references/api_reference.md +0 -303
  226. package/bin/skills/deepchem/references/workflows.md +0 -491
  227. package/bin/skills/deepchem/scripts/graph_neural_network.py +0 -338
  228. package/bin/skills/deepchem/scripts/predict_solubility.py +0 -224
  229. package/bin/skills/deepchem/scripts/transfer_learning.py +0 -375
  230. package/bin/skills/deepspeed/SKILL.md +0 -141
  231. package/bin/skills/deepspeed/references/08.md +0 -17
  232. package/bin/skills/deepspeed/references/09.md +0 -173
  233. package/bin/skills/deepspeed/references/2020.md +0 -378
  234. package/bin/skills/deepspeed/references/2023.md +0 -279
  235. package/bin/skills/deepspeed/references/assets.md +0 -179
  236. package/bin/skills/deepspeed/references/index.md +0 -35
  237. package/bin/skills/deepspeed/references/mii.md +0 -118
  238. package/bin/skills/deepspeed/references/other.md +0 -1191
  239. package/bin/skills/deepspeed/references/tutorials.md +0 -6554
  240. package/bin/skills/deeptools/SKILL.md +0 -531
  241. package/bin/skills/deeptools/assets/quick_reference.md +0 -58
  242. package/bin/skills/deeptools/references/effective_genome_sizes.md +0 -116
  243. package/bin/skills/deeptools/references/normalization_methods.md +0 -410
  244. package/bin/skills/deeptools/references/tools_reference.md +0 -533
  245. package/bin/skills/deeptools/references/workflows.md +0 -474
  246. package/bin/skills/deeptools/scripts/validate_files.py +0 -195
  247. package/bin/skills/deeptools/scripts/workflow_generator.py +0 -454
  248. package/bin/skills/denario/SKILL.md +0 -215
  249. package/bin/skills/denario/references/examples.md +0 -494
  250. package/bin/skills/denario/references/installation.md +0 -213
  251. package/bin/skills/denario/references/llm_configuration.md +0 -265
  252. package/bin/skills/denario/references/research_pipeline.md +0 -471
  253. package/bin/skills/diffdock/SKILL.md +0 -483
  254. package/bin/skills/diffdock/assets/batch_template.csv +0 -4
  255. package/bin/skills/diffdock/assets/custom_inference_config.yaml +0 -90
  256. package/bin/skills/diffdock/references/confidence_and_limitations.md +0 -182
  257. package/bin/skills/diffdock/references/parameters_reference.md +0 -163
  258. package/bin/skills/diffdock/references/workflows_examples.md +0 -392
  259. package/bin/skills/diffdock/scripts/analyze_results.py +0 -334
  260. package/bin/skills/diffdock/scripts/prepare_batch_csv.py +0 -254
  261. package/bin/skills/diffdock/scripts/setup_check.py +0 -278
  262. package/bin/skills/dnanexus-integration/SKILL.md +0 -383
  263. package/bin/skills/dnanexus-integration/references/app-development.md +0 -247
  264. package/bin/skills/dnanexus-integration/references/configuration.md +0 -646
  265. package/bin/skills/dnanexus-integration/references/data-operations.md +0 -400
  266. package/bin/skills/dnanexus-integration/references/job-execution.md +0 -412
  267. package/bin/skills/dnanexus-integration/references/python-sdk.md +0 -523
  268. package/bin/skills/document-skills/docx/LICENSE.txt +0 -30
  269. package/bin/skills/document-skills/docx/SKILL.md +0 -233
  270. package/bin/skills/document-skills/docx/docx-js.md +0 -350
  271. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-chart.xsd +0 -1499
  272. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-chartDrawing.xsd +0 -146
  273. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-diagram.xsd +0 -1085
  274. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-lockedCanvas.xsd +0 -11
  275. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-main.xsd +0 -3081
  276. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-picture.xsd +0 -23
  277. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-spreadsheetDrawing.xsd +0 -185
  278. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-wordprocessingDrawing.xsd +0 -287
  279. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/pml.xsd +0 -1676
  280. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-additionalCharacteristics.xsd +0 -28
  281. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-bibliography.xsd +0 -144
  282. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-commonSimpleTypes.xsd +0 -174
  283. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-customXmlDataProperties.xsd +0 -25
  284. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-customXmlSchemaProperties.xsd +0 -18
  285. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesCustom.xsd +0 -59
  286. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesExtended.xsd +0 -56
  287. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesVariantTypes.xsd +0 -195
  288. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-math.xsd +0 -582
  289. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-relationshipReference.xsd +0 -25
  290. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/sml.xsd +0 -4439
  291. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/vml-main.xsd +0 -570
  292. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/vml-officeDrawing.xsd +0 -509
  293. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/vml-presentationDrawing.xsd +0 -12
  294. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/vml-spreadsheetDrawing.xsd +0 -108
  295. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/vml-wordprocessingDrawing.xsd +0 -96
  296. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/wml.xsd +0 -3646
  297. package/bin/skills/document-skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/xml.xsd +0 -116
  298. package/bin/skills/document-skills/docx/ooxml/schemas/ecma/fouth-edition/opc-contentTypes.xsd +0 -42
  299. package/bin/skills/document-skills/docx/ooxml/schemas/ecma/fouth-edition/opc-coreProperties.xsd +0 -50
  300. package/bin/skills/document-skills/docx/ooxml/schemas/ecma/fouth-edition/opc-digSig.xsd +0 -49
  301. package/bin/skills/document-skills/docx/ooxml/schemas/ecma/fouth-edition/opc-relationships.xsd +0 -33
  302. package/bin/skills/document-skills/docx/ooxml/schemas/mce/mc.xsd +0 -75
  303. package/bin/skills/document-skills/docx/ooxml/schemas/microsoft/wml-2010.xsd +0 -560
  304. package/bin/skills/document-skills/docx/ooxml/schemas/microsoft/wml-2012.xsd +0 -67
  305. package/bin/skills/document-skills/docx/ooxml/schemas/microsoft/wml-2018.xsd +0 -14
  306. package/bin/skills/document-skills/docx/ooxml/schemas/microsoft/wml-cex-2018.xsd +0 -20
  307. package/bin/skills/document-skills/docx/ooxml/schemas/microsoft/wml-cid-2016.xsd +0 -13
  308. package/bin/skills/document-skills/docx/ooxml/schemas/microsoft/wml-sdtdatahash-2020.xsd +0 -4
  309. package/bin/skills/document-skills/docx/ooxml/schemas/microsoft/wml-symex-2015.xsd +0 -8
  310. package/bin/skills/document-skills/docx/ooxml/scripts/pack.py +0 -159
  311. package/bin/skills/document-skills/docx/ooxml/scripts/unpack.py +0 -29
  312. package/bin/skills/document-skills/docx/ooxml/scripts/validate.py +0 -69
  313. package/bin/skills/document-skills/docx/ooxml/scripts/validation/__init__.py +0 -15
  314. package/bin/skills/document-skills/docx/ooxml/scripts/validation/base.py +0 -951
  315. package/bin/skills/document-skills/docx/ooxml/scripts/validation/docx.py +0 -274
  316. package/bin/skills/document-skills/docx/ooxml/scripts/validation/pptx.py +0 -315
  317. package/bin/skills/document-skills/docx/ooxml/scripts/validation/redlining.py +0 -279
  318. package/bin/skills/document-skills/docx/ooxml.md +0 -610
  319. package/bin/skills/document-skills/docx/scripts/__init__.py +0 -1
  320. package/bin/skills/document-skills/docx/scripts/document.py +0 -1276
  321. package/bin/skills/document-skills/docx/scripts/templates/comments.xml +0 -3
  322. package/bin/skills/document-skills/docx/scripts/templates/commentsExtended.xml +0 -3
  323. package/bin/skills/document-skills/docx/scripts/templates/commentsExtensible.xml +0 -3
  324. package/bin/skills/document-skills/docx/scripts/templates/commentsIds.xml +0 -3
  325. package/bin/skills/document-skills/docx/scripts/templates/people.xml +0 -3
  326. package/bin/skills/document-skills/docx/scripts/utilities.py +0 -374
  327. package/bin/skills/document-skills/pdf/LICENSE.txt +0 -30
  328. package/bin/skills/document-skills/pdf/SKILL.md +0 -330
  329. package/bin/skills/document-skills/pdf/forms.md +0 -205
  330. package/bin/skills/document-skills/pdf/reference.md +0 -612
  331. package/bin/skills/document-skills/pdf/scripts/check_bounding_boxes.py +0 -70
  332. package/bin/skills/document-skills/pdf/scripts/check_bounding_boxes_test.py +0 -226
  333. package/bin/skills/document-skills/pdf/scripts/check_fillable_fields.py +0 -12
  334. package/bin/skills/document-skills/pdf/scripts/convert_pdf_to_images.py +0 -35
  335. package/bin/skills/document-skills/pdf/scripts/create_validation_image.py +0 -41
  336. package/bin/skills/document-skills/pdf/scripts/extract_form_field_info.py +0 -152
  337. package/bin/skills/document-skills/pdf/scripts/fill_fillable_fields.py +0 -114
  338. package/bin/skills/document-skills/pdf/scripts/fill_pdf_form_with_annotations.py +0 -108
  339. package/bin/skills/document-skills/pptx/LICENSE.txt +0 -30
  340. package/bin/skills/document-skills/pptx/SKILL.md +0 -520
  341. package/bin/skills/document-skills/pptx/html2pptx.md +0 -625
  342. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-chart.xsd +0 -1499
  343. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-chartDrawing.xsd +0 -146
  344. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-diagram.xsd +0 -1085
  345. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-lockedCanvas.xsd +0 -11
  346. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-main.xsd +0 -3081
  347. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-picture.xsd +0 -23
  348. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-spreadsheetDrawing.xsd +0 -185
  349. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-wordprocessingDrawing.xsd +0 -287
  350. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/pml.xsd +0 -1676
  351. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-additionalCharacteristics.xsd +0 -28
  352. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-bibliography.xsd +0 -144
  353. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-commonSimpleTypes.xsd +0 -174
  354. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-customXmlDataProperties.xsd +0 -25
  355. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-customXmlSchemaProperties.xsd +0 -18
  356. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesCustom.xsd +0 -59
  357. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesExtended.xsd +0 -56
  358. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesVariantTypes.xsd +0 -195
  359. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-math.xsd +0 -582
  360. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-relationshipReference.xsd +0 -25
  361. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/sml.xsd +0 -4439
  362. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/vml-main.xsd +0 -570
  363. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/vml-officeDrawing.xsd +0 -509
  364. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/vml-presentationDrawing.xsd +0 -12
  365. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/vml-spreadsheetDrawing.xsd +0 -108
  366. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/vml-wordprocessingDrawing.xsd +0 -96
  367. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/wml.xsd +0 -3646
  368. package/bin/skills/document-skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/xml.xsd +0 -116
  369. package/bin/skills/document-skills/pptx/ooxml/schemas/ecma/fouth-edition/opc-contentTypes.xsd +0 -42
  370. package/bin/skills/document-skills/pptx/ooxml/schemas/ecma/fouth-edition/opc-coreProperties.xsd +0 -50
  371. package/bin/skills/document-skills/pptx/ooxml/schemas/ecma/fouth-edition/opc-digSig.xsd +0 -49
  372. package/bin/skills/document-skills/pptx/ooxml/schemas/ecma/fouth-edition/opc-relationships.xsd +0 -33
  373. package/bin/skills/document-skills/pptx/ooxml/schemas/mce/mc.xsd +0 -75
  374. package/bin/skills/document-skills/pptx/ooxml/schemas/microsoft/wml-2010.xsd +0 -560
  375. package/bin/skills/document-skills/pptx/ooxml/schemas/microsoft/wml-2012.xsd +0 -67
  376. package/bin/skills/document-skills/pptx/ooxml/schemas/microsoft/wml-2018.xsd +0 -14
  377. package/bin/skills/document-skills/pptx/ooxml/schemas/microsoft/wml-cex-2018.xsd +0 -20
  378. package/bin/skills/document-skills/pptx/ooxml/schemas/microsoft/wml-cid-2016.xsd +0 -13
  379. package/bin/skills/document-skills/pptx/ooxml/schemas/microsoft/wml-sdtdatahash-2020.xsd +0 -4
  380. package/bin/skills/document-skills/pptx/ooxml/schemas/microsoft/wml-symex-2015.xsd +0 -8
  381. package/bin/skills/document-skills/pptx/ooxml/scripts/pack.py +0 -159
  382. package/bin/skills/document-skills/pptx/ooxml/scripts/unpack.py +0 -29
  383. package/bin/skills/document-skills/pptx/ooxml/scripts/validate.py +0 -69
  384. package/bin/skills/document-skills/pptx/ooxml/scripts/validation/__init__.py +0 -15
  385. package/bin/skills/document-skills/pptx/ooxml/scripts/validation/base.py +0 -951
  386. package/bin/skills/document-skills/pptx/ooxml/scripts/validation/docx.py +0 -274
  387. package/bin/skills/document-skills/pptx/ooxml/scripts/validation/pptx.py +0 -315
  388. package/bin/skills/document-skills/pptx/ooxml/scripts/validation/redlining.py +0 -279
  389. package/bin/skills/document-skills/pptx/ooxml.md +0 -427
  390. package/bin/skills/document-skills/pptx/scripts/html2pptx.js +0 -979
  391. package/bin/skills/document-skills/pptx/scripts/inventory.py +0 -1020
  392. package/bin/skills/document-skills/pptx/scripts/rearrange.py +0 -231
  393. package/bin/skills/document-skills/pptx/scripts/replace.py +0 -385
  394. package/bin/skills/document-skills/pptx/scripts/thumbnail.py +0 -450
  395. package/bin/skills/document-skills/xlsx/LICENSE.txt +0 -30
  396. package/bin/skills/document-skills/xlsx/SKILL.md +0 -325
  397. package/bin/skills/document-skills/xlsx/recalc.py +0 -178
  398. package/bin/skills/drugbank-database/SKILL.md +0 -190
  399. package/bin/skills/drugbank-database/references/chemical-analysis.md +0 -590
  400. package/bin/skills/drugbank-database/references/data-access.md +0 -242
  401. package/bin/skills/drugbank-database/references/drug-queries.md +0 -386
  402. package/bin/skills/drugbank-database/references/interactions.md +0 -425
  403. package/bin/skills/drugbank-database/references/targets-pathways.md +0 -518
  404. package/bin/skills/drugbank-database/scripts/drugbank_helper.py +0 -350
  405. package/bin/skills/dspy/SKILL.md +0 -590
  406. package/bin/skills/dspy/references/examples.md +0 -663
  407. package/bin/skills/dspy/references/modules.md +0 -475
  408. package/bin/skills/dspy/references/optimizers.md +0 -566
  409. package/bin/skills/ena-database/SKILL.md +0 -204
  410. package/bin/skills/ena-database/references/api_reference.md +0 -490
  411. package/bin/skills/ensembl-database/SKILL.md +0 -311
  412. package/bin/skills/ensembl-database/references/api_endpoints.md +0 -346
  413. package/bin/skills/ensembl-database/scripts/ensembl_query.py +0 -427
  414. package/bin/skills/esm/SKILL.md +0 -306
  415. package/bin/skills/esm/references/esm-c-api.md +0 -583
  416. package/bin/skills/esm/references/esm3-api.md +0 -452
  417. package/bin/skills/esm/references/forge-api.md +0 -657
  418. package/bin/skills/esm/references/workflows.md +0 -685
  419. package/bin/skills/etetoolkit/SKILL.md +0 -623
  420. package/bin/skills/etetoolkit/references/api_reference.md +0 -583
  421. package/bin/skills/etetoolkit/references/visualization.md +0 -783
  422. package/bin/skills/etetoolkit/references/workflows.md +0 -774
  423. package/bin/skills/etetoolkit/scripts/quick_visualize.py +0 -214
  424. package/bin/skills/etetoolkit/scripts/tree_operations.py +0 -229
  425. package/bin/skills/exploratory-data-analysis/SKILL.md +0 -446
  426. package/bin/skills/exploratory-data-analysis/assets/report_template.md +0 -196
  427. package/bin/skills/exploratory-data-analysis/references/bioinformatics_genomics_formats.md +0 -664
  428. package/bin/skills/exploratory-data-analysis/references/chemistry_molecular_formats.md +0 -664
  429. package/bin/skills/exploratory-data-analysis/references/general_scientific_formats.md +0 -518
  430. package/bin/skills/exploratory-data-analysis/references/microscopy_imaging_formats.md +0 -620
  431. package/bin/skills/exploratory-data-analysis/references/proteomics_metabolomics_formats.md +0 -517
  432. package/bin/skills/exploratory-data-analysis/references/spectroscopy_analytical_formats.md +0 -633
  433. package/bin/skills/exploratory-data-analysis/scripts/eda_analyzer.py +0 -547
  434. package/bin/skills/faiss/SKILL.md +0 -221
  435. package/bin/skills/faiss/references/index_types.md +0 -280
  436. package/bin/skills/fda-database/SKILL.md +0 -518
  437. package/bin/skills/fda-database/references/animal_veterinary.md +0 -377
  438. package/bin/skills/fda-database/references/api_basics.md +0 -687
  439. package/bin/skills/fda-database/references/devices.md +0 -632
  440. package/bin/skills/fda-database/references/drugs.md +0 -468
  441. package/bin/skills/fda-database/references/foods.md +0 -374
  442. package/bin/skills/fda-database/references/other.md +0 -472
  443. package/bin/skills/fda-database/scripts/fda_examples.py +0 -335
  444. package/bin/skills/fda-database/scripts/fda_query.py +0 -440
  445. package/bin/skills/fireworks-ai/SKILL.md +0 -665
  446. package/bin/skills/flash-attention/SKILL.md +0 -367
  447. package/bin/skills/flash-attention/references/benchmarks.md +0 -215
  448. package/bin/skills/flash-attention/references/transformers-integration.md +0 -293
  449. package/bin/skills/flowio/SKILL.md +0 -608
  450. package/bin/skills/flowio/references/api_reference.md +0 -372
  451. package/bin/skills/fluidsim/SKILL.md +0 -349
  452. package/bin/skills/fluidsim/references/advanced_features.md +0 -398
  453. package/bin/skills/fluidsim/references/installation.md +0 -68
  454. package/bin/skills/fluidsim/references/output_analysis.md +0 -283
  455. package/bin/skills/fluidsim/references/parameters.md +0 -198
  456. package/bin/skills/fluidsim/references/simulation_workflow.md +0 -172
  457. package/bin/skills/fluidsim/references/solvers.md +0 -94
  458. package/bin/skills/fred-economic-data/SKILL.md +0 -433
  459. package/bin/skills/fred-economic-data/references/api_basics.md +0 -212
  460. package/bin/skills/fred-economic-data/references/categories.md +0 -442
  461. package/bin/skills/fred-economic-data/references/geofred.md +0 -588
  462. package/bin/skills/fred-economic-data/references/releases.md +0 -642
  463. package/bin/skills/fred-economic-data/references/series.md +0 -584
  464. package/bin/skills/fred-economic-data/references/sources.md +0 -423
  465. package/bin/skills/fred-economic-data/references/tags.md +0 -485
  466. package/bin/skills/fred-economic-data/scripts/fred_examples.py +0 -354
  467. package/bin/skills/fred-economic-data/scripts/fred_query.py +0 -590
  468. package/bin/skills/gene-database/SKILL.md +0 -179
  469. package/bin/skills/gene-database/references/api_reference.md +0 -404
  470. package/bin/skills/gene-database/references/common_workflows.md +0 -428
  471. package/bin/skills/gene-database/scripts/batch_gene_lookup.py +0 -298
  472. package/bin/skills/gene-database/scripts/fetch_gene_data.py +0 -277
  473. package/bin/skills/gene-database/scripts/query_gene.py +0 -251
  474. package/bin/skills/generate-image/SKILL.md +0 -178
  475. package/bin/skills/generate-image/scripts/generate_image.py +0 -254
  476. package/bin/skills/geniml/SKILL.md +0 -318
  477. package/bin/skills/geniml/references/bedspace.md +0 -127
  478. package/bin/skills/geniml/references/consensus_peaks.md +0 -238
  479. package/bin/skills/geniml/references/region2vec.md +0 -90
  480. package/bin/skills/geniml/references/scembed.md +0 -197
  481. package/bin/skills/geniml/references/utilities.md +0 -385
  482. package/bin/skills/geo-database/SKILL.md +0 -815
  483. package/bin/skills/geo-database/references/geo_reference.md +0 -829
  484. package/bin/skills/geopandas/SKILL.md +0 -251
  485. package/bin/skills/geopandas/references/crs-management.md +0 -243
  486. package/bin/skills/geopandas/references/data-io.md +0 -165
  487. package/bin/skills/geopandas/references/data-structures.md +0 -70
  488. package/bin/skills/geopandas/references/geometric-operations.md +0 -221
  489. package/bin/skills/geopandas/references/spatial-analysis.md +0 -184
  490. package/bin/skills/geopandas/references/visualization.md +0 -243
  491. package/bin/skills/get-available-resources/SKILL.md +0 -277
  492. package/bin/skills/get-available-resources/scripts/detect_resources.py +0 -401
  493. package/bin/skills/gget/SKILL.md +0 -871
  494. package/bin/skills/gget/references/database_info.md +0 -300
  495. package/bin/skills/gget/references/module_reference.md +0 -467
  496. package/bin/skills/gget/references/workflows.md +0 -814
  497. package/bin/skills/gget/scripts/batch_sequence_analysis.py +0 -191
  498. package/bin/skills/gget/scripts/enrichment_pipeline.py +0 -235
  499. package/bin/skills/gget/scripts/gene_analysis.py +0 -161
  500. package/bin/skills/gguf/SKILL.md +0 -427
  501. package/bin/skills/gguf/references/advanced-usage.md +0 -504
  502. package/bin/skills/gguf/references/troubleshooting.md +0 -442
  503. package/bin/skills/gptq/SKILL.md +0 -450
  504. package/bin/skills/gptq/references/calibration.md +0 -337
  505. package/bin/skills/gptq/references/integration.md +0 -129
  506. package/bin/skills/gptq/references/troubleshooting.md +0 -95
  507. package/bin/skills/groq/SKILL.md +0 -347
  508. package/bin/skills/grpo-rl-training/README.md +0 -97
  509. package/bin/skills/grpo-rl-training/SKILL.md +0 -572
  510. package/bin/skills/grpo-rl-training/examples/reward_functions_library.py +0 -393
  511. package/bin/skills/grpo-rl-training/templates/basic_grpo_training.py +0 -228
  512. package/bin/skills/gtars/SKILL.md +0 -285
  513. package/bin/skills/gtars/references/cli.md +0 -222
  514. package/bin/skills/gtars/references/coverage.md +0 -172
  515. package/bin/skills/gtars/references/overlap.md +0 -156
  516. package/bin/skills/gtars/references/python-api.md +0 -211
  517. package/bin/skills/gtars/references/refget.md +0 -147
  518. package/bin/skills/gtars/references/tokenizers.md +0 -103
  519. package/bin/skills/guidance/SKILL.md +0 -572
  520. package/bin/skills/guidance/references/backends.md +0 -554
  521. package/bin/skills/guidance/references/constraints.md +0 -674
  522. package/bin/skills/guidance/references/examples.md +0 -767
  523. package/bin/skills/gwas-database/SKILL.md +0 -608
  524. package/bin/skills/gwas-database/references/api_reference.md +0 -793
  525. package/bin/skills/histolab/SKILL.md +0 -678
  526. package/bin/skills/histolab/references/filters_preprocessing.md +0 -514
  527. package/bin/skills/histolab/references/slide_management.md +0 -172
  528. package/bin/skills/histolab/references/tile_extraction.md +0 -421
  529. package/bin/skills/histolab/references/tissue_masks.md +0 -251
  530. package/bin/skills/histolab/references/visualization.md +0 -547
  531. package/bin/skills/hmdb-database/SKILL.md +0 -196
  532. package/bin/skills/hmdb-database/references/hmdb_data_fields.md +0 -267
  533. package/bin/skills/hqq/SKILL.md +0 -445
  534. package/bin/skills/hqq/references/advanced-usage.md +0 -528
  535. package/bin/skills/hqq/references/troubleshooting.md +0 -503
  536. package/bin/skills/hugging-face-cli/SKILL.md +0 -191
  537. package/bin/skills/hugging-face-cli/references/commands.md +0 -954
  538. package/bin/skills/hugging-face-cli/references/examples.md +0 -374
  539. package/bin/skills/hugging-face-datasets/SKILL.md +0 -547
  540. package/bin/skills/hugging-face-datasets/examples/diverse_training_examples.json +0 -239
  541. package/bin/skills/hugging-face-datasets/examples/system_prompt_template.txt +0 -196
  542. package/bin/skills/hugging-face-datasets/examples/training_examples.json +0 -176
  543. package/bin/skills/hugging-face-datasets/scripts/dataset_manager.py +0 -522
  544. package/bin/skills/hugging-face-datasets/scripts/sql_manager.py +0 -844
  545. package/bin/skills/hugging-face-datasets/templates/chat.json +0 -55
  546. package/bin/skills/hugging-face-datasets/templates/classification.json +0 -62
  547. package/bin/skills/hugging-face-datasets/templates/completion.json +0 -51
  548. package/bin/skills/hugging-face-datasets/templates/custom.json +0 -75
  549. package/bin/skills/hugging-face-datasets/templates/qa.json +0 -54
  550. package/bin/skills/hugging-face-datasets/templates/tabular.json +0 -81
  551. package/bin/skills/hugging-face-evaluation/SKILL.md +0 -656
  552. package/bin/skills/hugging-face-evaluation/examples/.env.example +0 -7
  553. package/bin/skills/hugging-face-evaluation/examples/USAGE_EXAMPLES.md +0 -382
  554. package/bin/skills/hugging-face-evaluation/examples/artificial_analysis_to_hub.py +0 -141
  555. package/bin/skills/hugging-face-evaluation/examples/example_readme_tables.md +0 -135
  556. package/bin/skills/hugging-face-evaluation/examples/metric_mapping.json +0 -50
  557. package/bin/skills/hugging-face-evaluation/requirements.txt +0 -20
  558. package/bin/skills/hugging-face-evaluation/scripts/evaluation_manager.py +0 -1374
  559. package/bin/skills/hugging-face-evaluation/scripts/inspect_eval_uv.py +0 -104
  560. package/bin/skills/hugging-face-evaluation/scripts/inspect_vllm_uv.py +0 -317
  561. package/bin/skills/hugging-face-evaluation/scripts/lighteval_vllm_uv.py +0 -303
  562. package/bin/skills/hugging-face-evaluation/scripts/run_eval_job.py +0 -98
  563. package/bin/skills/hugging-face-evaluation/scripts/run_vllm_eval_job.py +0 -331
  564. package/bin/skills/hugging-face-evaluation/scripts/test_extraction.py +0 -206
  565. package/bin/skills/hugging-face-jobs/SKILL.md +0 -1040
  566. package/bin/skills/hugging-face-jobs/index.html +0 -216
  567. package/bin/skills/hugging-face-jobs/references/hardware_guide.md +0 -336
  568. package/bin/skills/hugging-face-jobs/references/hub_saving.md +0 -352
  569. package/bin/skills/hugging-face-jobs/references/token_usage.md +0 -546
  570. package/bin/skills/hugging-face-jobs/references/troubleshooting.md +0 -475
  571. package/bin/skills/hugging-face-jobs/scripts/cot-self-instruct.py +0 -718
  572. package/bin/skills/hugging-face-jobs/scripts/finepdfs-stats.py +0 -546
  573. package/bin/skills/hugging-face-jobs/scripts/generate-responses.py +0 -587
  574. package/bin/skills/hugging-face-model-trainer/SKILL.md +0 -710
  575. package/bin/skills/hugging-face-model-trainer/references/gguf_conversion.md +0 -296
  576. package/bin/skills/hugging-face-model-trainer/references/hardware_guide.md +0 -283
  577. package/bin/skills/hugging-face-model-trainer/references/hub_saving.md +0 -364
  578. package/bin/skills/hugging-face-model-trainer/references/reliability_principles.md +0 -371
  579. package/bin/skills/hugging-face-model-trainer/references/trackio_guide.md +0 -189
  580. package/bin/skills/hugging-face-model-trainer/references/training_methods.md +0 -150
  581. package/bin/skills/hugging-face-model-trainer/references/training_patterns.md +0 -203
  582. package/bin/skills/hugging-face-model-trainer/references/troubleshooting.md +0 -282
  583. package/bin/skills/hugging-face-model-trainer/scripts/convert_to_gguf.py +0 -424
  584. package/bin/skills/hugging-face-model-trainer/scripts/dataset_inspector.py +0 -417
  585. package/bin/skills/hugging-face-model-trainer/scripts/estimate_cost.py +0 -150
  586. package/bin/skills/hugging-face-model-trainer/scripts/train_dpo_example.py +0 -106
  587. package/bin/skills/hugging-face-model-trainer/scripts/train_grpo_example.py +0 -89
  588. package/bin/skills/hugging-face-model-trainer/scripts/train_sft_example.py +0 -122
  589. package/bin/skills/hugging-face-paper-publisher/SKILL.md +0 -627
  590. package/bin/skills/hugging-face-paper-publisher/examples/example_usage.md +0 -327
  591. package/bin/skills/hugging-face-paper-publisher/references/quick_reference.md +0 -216
  592. package/bin/skills/hugging-face-paper-publisher/scripts/paper_manager.py +0 -508
  593. package/bin/skills/hugging-face-paper-publisher/templates/arxiv.md +0 -299
  594. package/bin/skills/hugging-face-paper-publisher/templates/ml-report.md +0 -358
  595. package/bin/skills/hugging-face-paper-publisher/templates/modern.md +0 -319
  596. package/bin/skills/hugging-face-paper-publisher/templates/standard.md +0 -201
  597. package/bin/skills/hugging-face-tool-builder/SKILL.md +0 -115
  598. package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.py +0 -57
  599. package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.sh +0 -40
  600. package/bin/skills/hugging-face-tool-builder/references/baseline_hf_api.tsx +0 -57
  601. package/bin/skills/hugging-face-tool-builder/references/find_models_by_paper.sh +0 -230
  602. package/bin/skills/hugging-face-tool-builder/references/hf_enrich_models.sh +0 -96
  603. package/bin/skills/hugging-face-tool-builder/references/hf_model_card_frontmatter.sh +0 -188
  604. package/bin/skills/hugging-face-tool-builder/references/hf_model_papers_auth.sh +0 -171
  605. package/bin/skills/hugging-face-trackio/.claude-plugin/plugin.json +0 -19
  606. package/bin/skills/hugging-face-trackio/SKILL.md +0 -65
  607. package/bin/skills/hugging-face-trackio/references/logging_metrics.md +0 -206
  608. package/bin/skills/hugging-face-trackio/references/retrieving_metrics.md +0 -223
  609. package/bin/skills/huggingface-tokenizers/SKILL.md +0 -516
  610. package/bin/skills/huggingface-tokenizers/references/algorithms.md +0 -653
  611. package/bin/skills/huggingface-tokenizers/references/integration.md +0 -637
  612. package/bin/skills/huggingface-tokenizers/references/pipeline.md +0 -723
  613. package/bin/skills/huggingface-tokenizers/references/training.md +0 -565
  614. package/bin/skills/hypogenic/SKILL.md +0 -655
  615. package/bin/skills/hypogenic/references/config_template.yaml +0 -150
  616. package/bin/skills/hypothesis-generation/SKILL.md +0 -293
  617. package/bin/skills/hypothesis-generation/assets/FORMATTING_GUIDE.md +0 -672
  618. package/bin/skills/hypothesis-generation/assets/hypothesis_generation.sty +0 -307
  619. package/bin/skills/hypothesis-generation/assets/hypothesis_report_template.tex +0 -572
  620. package/bin/skills/hypothesis-generation/references/experimental_design_patterns.md +0 -329
  621. package/bin/skills/hypothesis-generation/references/hypothesis_quality_criteria.md +0 -198
  622. package/bin/skills/hypothesis-generation/references/literature_search_strategies.md +0 -622
  623. package/bin/skills/imaging-data-commons/SKILL.md +0 -1182
  624. package/bin/skills/imaging-data-commons/references/bigquery_guide.md +0 -556
  625. package/bin/skills/imaging-data-commons/references/cli_guide.md +0 -272
  626. package/bin/skills/imaging-data-commons/references/cloud_storage_guide.md +0 -333
  627. package/bin/skills/imaging-data-commons/references/dicomweb_guide.md +0 -399
  628. package/bin/skills/infographics/SKILL.md +0 -563
  629. package/bin/skills/infographics/references/color_palettes.md +0 -496
  630. package/bin/skills/infographics/references/design_principles.md +0 -636
  631. package/bin/skills/infographics/references/infographic_types.md +0 -907
  632. package/bin/skills/infographics/scripts/generate_infographic.py +0 -234
  633. package/bin/skills/infographics/scripts/generate_infographic_ai.py +0 -1290
  634. package/bin/skills/instructor/SKILL.md +0 -740
  635. package/bin/skills/instructor/references/examples.md +0 -107
  636. package/bin/skills/instructor/references/providers.md +0 -70
  637. package/bin/skills/instructor/references/validation.md +0 -606
  638. package/bin/skills/iso-13485-certification/SKILL.md +0 -680
  639. package/bin/skills/iso-13485-certification/assets/templates/procedures/CAPA-procedure-template.md +0 -453
  640. package/bin/skills/iso-13485-certification/assets/templates/procedures/document-control-procedure-template.md +0 -567
  641. package/bin/skills/iso-13485-certification/assets/templates/quality-manual-template.md +0 -521
  642. package/bin/skills/iso-13485-certification/references/gap-analysis-checklist.md +0 -568
  643. package/bin/skills/iso-13485-certification/references/iso-13485-requirements.md +0 -610
  644. package/bin/skills/iso-13485-certification/references/mandatory-documents.md +0 -606
  645. package/bin/skills/iso-13485-certification/references/quality-manual-guide.md +0 -688
  646. package/bin/skills/iso-13485-certification/scripts/gap_analyzer.py +0 -440
  647. package/bin/skills/kegg-database/SKILL.md +0 -377
  648. package/bin/skills/kegg-database/references/kegg_reference.md +0 -326
  649. package/bin/skills/kegg-database/scripts/kegg_api.py +0 -251
  650. package/bin/skills/knowledge-distillation/SKILL.md +0 -458
  651. package/bin/skills/knowledge-distillation/references/minillm.md +0 -334
  652. package/bin/skills/labarchive-integration/SKILL.md +0 -268
  653. package/bin/skills/labarchive-integration/references/api_reference.md +0 -342
  654. package/bin/skills/labarchive-integration/references/authentication_guide.md +0 -357
  655. package/bin/skills/labarchive-integration/references/integrations.md +0 -425
  656. package/bin/skills/labarchive-integration/scripts/entry_operations.py +0 -334
  657. package/bin/skills/labarchive-integration/scripts/notebook_operations.py +0 -269
  658. package/bin/skills/labarchive-integration/scripts/setup_config.py +0 -205
  659. package/bin/skills/lambda-labs/SKILL.md +0 -545
  660. package/bin/skills/lambda-labs/references/advanced-usage.md +0 -611
  661. package/bin/skills/lambda-labs/references/troubleshooting.md +0 -530
  662. package/bin/skills/lamindb/SKILL.md +0 -390
  663. package/bin/skills/lamindb/references/annotation-validation.md +0 -513
  664. package/bin/skills/lamindb/references/core-concepts.md +0 -380
  665. package/bin/skills/lamindb/references/data-management.md +0 -433
  666. package/bin/skills/lamindb/references/integrations.md +0 -642
  667. package/bin/skills/lamindb/references/ontologies.md +0 -497
  668. package/bin/skills/lamindb/references/setup-deployment.md +0 -733
  669. package/bin/skills/langchain/SKILL.md +0 -480
  670. package/bin/skills/langchain/references/agents.md +0 -499
  671. package/bin/skills/langchain/references/integration.md +0 -562
  672. package/bin/skills/langchain/references/rag.md +0 -600
  673. package/bin/skills/langsmith/SKILL.md +0 -422
  674. package/bin/skills/langsmith/references/advanced-usage.md +0 -548
  675. package/bin/skills/langsmith/references/troubleshooting.md +0 -537
  676. package/bin/skills/latchbio-integration/SKILL.md +0 -353
  677. package/bin/skills/latchbio-integration/references/data-management.md +0 -427
  678. package/bin/skills/latchbio-integration/references/resource-configuration.md +0 -429
  679. package/bin/skills/latchbio-integration/references/verified-workflows.md +0 -487
  680. package/bin/skills/latchbio-integration/references/workflow-creation.md +0 -254
  681. package/bin/skills/latex-posters/README.md +0 -417
  682. package/bin/skills/latex-posters/SKILL.md +0 -1602
  683. package/bin/skills/latex-posters/assets/baposter_template.tex +0 -257
  684. package/bin/skills/latex-posters/assets/beamerposter_template.tex +0 -244
  685. package/bin/skills/latex-posters/assets/poster_quality_checklist.md +0 -358
  686. package/bin/skills/latex-posters/assets/tikzposter_template.tex +0 -251
  687. package/bin/skills/latex-posters/references/latex_poster_packages.md +0 -745
  688. package/bin/skills/latex-posters/references/poster_content_guide.md +0 -748
  689. package/bin/skills/latex-posters/references/poster_design_principles.md +0 -806
  690. package/bin/skills/latex-posters/references/poster_layout_design.md +0 -900
  691. package/bin/skills/latex-posters/scripts/review_poster.sh +0 -214
  692. package/bin/skills/literature-review/SKILL.md +0 -641
  693. package/bin/skills/literature-review/assets/review_template.md +0 -412
  694. package/bin/skills/literature-review/references/citation_styles.md +0 -166
  695. package/bin/skills/literature-review/references/database_strategies.md +0 -455
  696. package/bin/skills/literature-review/scripts/generate_pdf.py +0 -184
  697. package/bin/skills/literature-review/scripts/search_databases.py +0 -310
  698. package/bin/skills/literature-review/scripts/verify_citations.py +0 -218
  699. package/bin/skills/litgpt/SKILL.md +0 -469
  700. package/bin/skills/litgpt/references/custom-models.md +0 -568
  701. package/bin/skills/litgpt/references/distributed-training.md +0 -451
  702. package/bin/skills/litgpt/references/supported-models.md +0 -336
  703. package/bin/skills/litgpt/references/training-recipes.md +0 -619
  704. package/bin/skills/llama-cpp/SKILL.md +0 -258
  705. package/bin/skills/llama-cpp/references/optimization.md +0 -89
  706. package/bin/skills/llama-cpp/references/quantization.md +0 -213
  707. package/bin/skills/llama-cpp/references/server.md +0 -125
  708. package/bin/skills/llama-factory/SKILL.md +0 -80
  709. package/bin/skills/llama-factory/references/_images.md +0 -23
  710. package/bin/skills/llama-factory/references/advanced.md +0 -1055
  711. package/bin/skills/llama-factory/references/getting_started.md +0 -349
  712. package/bin/skills/llama-factory/references/index.md +0 -19
  713. package/bin/skills/llama-factory/references/other.md +0 -31
  714. package/bin/skills/llamaguard/SKILL.md +0 -337
  715. package/bin/skills/llamaindex/SKILL.md +0 -569
  716. package/bin/skills/llamaindex/references/agents.md +0 -83
  717. package/bin/skills/llamaindex/references/data_connectors.md +0 -108
  718. package/bin/skills/llamaindex/references/query_engines.md +0 -406
  719. package/bin/skills/llava/SKILL.md +0 -304
  720. package/bin/skills/llava/references/training.md +0 -197
  721. package/bin/skills/llm-as-judge-evaluation/SKILL.md +0 -385
  722. package/bin/skills/llm-as-judge-evaluation/references/pairwise-comparison.md +0 -95
  723. package/bin/skills/llm-as-judge-evaluation/references/scoring-rubrics.md +0 -169
  724. package/bin/skills/lm-evaluation-harness/SKILL.md +0 -490
  725. package/bin/skills/lm-evaluation-harness/references/api-evaluation.md +0 -490
  726. package/bin/skills/lm-evaluation-harness/references/benchmark-guide.md +0 -488
  727. package/bin/skills/lm-evaluation-harness/references/custom-tasks.md +0 -602
  728. package/bin/skills/lm-evaluation-harness/references/distributed-eval.md +0 -519
  729. package/bin/skills/long-context/SKILL.md +0 -536
  730. package/bin/skills/long-context/references/extension_methods.md +0 -468
  731. package/bin/skills/long-context/references/fine_tuning.md +0 -611
  732. package/bin/skills/long-context/references/rope.md +0 -402
  733. package/bin/skills/mamba/SKILL.md +0 -260
  734. package/bin/skills/mamba/references/architecture-details.md +0 -206
  735. package/bin/skills/mamba/references/benchmarks.md +0 -255
  736. package/bin/skills/mamba/references/training-guide.md +0 -388
  737. package/bin/skills/market-research-reports/SKILL.md +0 -904
  738. package/bin/skills/market-research-reports/assets/FORMATTING_GUIDE.md +0 -428
  739. package/bin/skills/market-research-reports/assets/market_report_template.tex +0 -1380
  740. package/bin/skills/market-research-reports/assets/market_research.sty +0 -564
  741. package/bin/skills/market-research-reports/references/data_analysis_patterns.md +0 -548
  742. package/bin/skills/market-research-reports/references/report_structure_guide.md +0 -999
  743. package/bin/skills/market-research-reports/references/visual_generation_guide.md +0 -1077
  744. package/bin/skills/market-research-reports/scripts/generate_market_visuals.py +0 -472
  745. package/bin/skills/markitdown/INSTALLATION_GUIDE.md +0 -318
  746. package/bin/skills/markitdown/LICENSE.txt +0 -22
  747. package/bin/skills/markitdown/OPENROUTER_INTEGRATION.md +0 -359
  748. package/bin/skills/markitdown/QUICK_REFERENCE.md +0 -309
  749. package/bin/skills/markitdown/README.md +0 -184
  750. package/bin/skills/markitdown/SKILL.md +0 -486
  751. package/bin/skills/markitdown/SKILL_SUMMARY.md +0 -307
  752. package/bin/skills/markitdown/assets/example_usage.md +0 -463
  753. package/bin/skills/markitdown/references/api_reference.md +0 -399
  754. package/bin/skills/markitdown/references/file_formats.md +0 -542
  755. package/bin/skills/markitdown/scripts/batch_convert.py +0 -195
  756. package/bin/skills/markitdown/scripts/convert_literature.py +0 -262
  757. package/bin/skills/markitdown/scripts/convert_with_ai.py +0 -224
  758. package/bin/skills/matchms/SKILL.md +0 -203
  759. package/bin/skills/matchms/references/filtering.md +0 -288
  760. package/bin/skills/matchms/references/importing_exporting.md +0 -416
  761. package/bin/skills/matchms/references/similarity.md +0 -380
  762. package/bin/skills/matchms/references/workflows.md +0 -647
  763. package/bin/skills/matlab/SKILL.md +0 -376
  764. package/bin/skills/matlab/references/data-import-export.md +0 -479
  765. package/bin/skills/matlab/references/executing-scripts.md +0 -444
  766. package/bin/skills/matlab/references/graphics-visualization.md +0 -579
  767. package/bin/skills/matlab/references/mathematics.md +0 -553
  768. package/bin/skills/matlab/references/matrices-arrays.md +0 -349
  769. package/bin/skills/matlab/references/octave-compatibility.md +0 -544
  770. package/bin/skills/matlab/references/programming.md +0 -672
  771. package/bin/skills/matlab/references/python-integration.md +0 -433
  772. package/bin/skills/matplotlib/SKILL.md +0 -361
  773. package/bin/skills/matplotlib/references/api_reference.md +0 -412
  774. package/bin/skills/matplotlib/references/common_issues.md +0 -563
  775. package/bin/skills/matplotlib/references/plot_types.md +0 -476
  776. package/bin/skills/matplotlib/references/styling_guide.md +0 -589
  777. package/bin/skills/matplotlib/scripts/plot_template.py +0 -401
  778. package/bin/skills/matplotlib/scripts/style_configurator.py +0 -409
  779. package/bin/skills/medchem/SKILL.md +0 -406
  780. package/bin/skills/medchem/references/api_guide.md +0 -600
  781. package/bin/skills/medchem/references/rules_catalog.md +0 -604
  782. package/bin/skills/medchem/scripts/filter_molecules.py +0 -418
  783. package/bin/skills/megatron-core/SKILL.md +0 -366
  784. package/bin/skills/megatron-core/references/benchmarks.md +0 -249
  785. package/bin/skills/megatron-core/references/parallelism-guide.md +0 -404
  786. package/bin/skills/megatron-core/references/production-examples.md +0 -473
  787. package/bin/skills/megatron-core/references/training-recipes.md +0 -547
  788. package/bin/skills/metabolomics-workbench-database/SKILL.md +0 -259
  789. package/bin/skills/metabolomics-workbench-database/references/api_reference.md +0 -494
  790. package/bin/skills/miles/SKILL.md +0 -315
  791. package/bin/skills/miles/references/api-reference.md +0 -141
  792. package/bin/skills/miles/references/troubleshooting.md +0 -352
  793. package/bin/skills/ml-paper-writing/SKILL.md +0 -937
  794. package/bin/skills/ml-paper-writing/references/checklists.md +0 -361
  795. package/bin/skills/ml-paper-writing/references/citation-workflow.md +0 -562
  796. package/bin/skills/ml-paper-writing/references/reviewer-guidelines.md +0 -367
  797. package/bin/skills/ml-paper-writing/references/sources.md +0 -159
  798. package/bin/skills/ml-paper-writing/references/writing-guide.md +0 -476
  799. package/bin/skills/ml-paper-writing/templates/README.md +0 -251
  800. package/bin/skills/ml-paper-writing/templates/aaai2026/README.md +0 -534
  801. package/bin/skills/ml-paper-writing/templates/aaai2026/aaai2026-unified-supp.tex +0 -144
  802. package/bin/skills/ml-paper-writing/templates/aaai2026/aaai2026-unified-template.tex +0 -952
  803. package/bin/skills/ml-paper-writing/templates/aaai2026/aaai2026.bib +0 -111
  804. package/bin/skills/ml-paper-writing/templates/aaai2026/aaai2026.bst +0 -1493
  805. package/bin/skills/ml-paper-writing/templates/aaai2026/aaai2026.sty +0 -315
  806. package/bin/skills/ml-paper-writing/templates/acl/README.md +0 -50
  807. package/bin/skills/ml-paper-writing/templates/acl/acl.sty +0 -312
  808. package/bin/skills/ml-paper-writing/templates/acl/acl_latex.tex +0 -377
  809. package/bin/skills/ml-paper-writing/templates/acl/acl_lualatex.tex +0 -101
  810. package/bin/skills/ml-paper-writing/templates/acl/acl_natbib.bst +0 -1940
  811. package/bin/skills/ml-paper-writing/templates/acl/anthology.bib.txt +0 -26
  812. package/bin/skills/ml-paper-writing/templates/acl/custom.bib +0 -70
  813. package/bin/skills/ml-paper-writing/templates/acl/formatting.md +0 -326
  814. package/bin/skills/ml-paper-writing/templates/colm2025/README.md +0 -3
  815. package/bin/skills/ml-paper-writing/templates/colm2025/colm2025_conference.bib +0 -11
  816. package/bin/skills/ml-paper-writing/templates/colm2025/colm2025_conference.bst +0 -1440
  817. package/bin/skills/ml-paper-writing/templates/colm2025/colm2025_conference.pdf +0 -0
  818. package/bin/skills/ml-paper-writing/templates/colm2025/colm2025_conference.sty +0 -218
  819. package/bin/skills/ml-paper-writing/templates/colm2025/colm2025_conference.tex +0 -305
  820. package/bin/skills/ml-paper-writing/templates/colm2025/fancyhdr.sty +0 -485
  821. package/bin/skills/ml-paper-writing/templates/colm2025/math_commands.tex +0 -508
  822. package/bin/skills/ml-paper-writing/templates/colm2025/natbib.sty +0 -1246
  823. package/bin/skills/ml-paper-writing/templates/iclr2026/fancyhdr.sty +0 -485
  824. package/bin/skills/ml-paper-writing/templates/iclr2026/iclr2026_conference.bib +0 -24
  825. package/bin/skills/ml-paper-writing/templates/iclr2026/iclr2026_conference.bst +0 -1440
  826. package/bin/skills/ml-paper-writing/templates/iclr2026/iclr2026_conference.pdf +0 -0
  827. package/bin/skills/ml-paper-writing/templates/iclr2026/iclr2026_conference.sty +0 -246
  828. package/bin/skills/ml-paper-writing/templates/iclr2026/iclr2026_conference.tex +0 -414
  829. package/bin/skills/ml-paper-writing/templates/iclr2026/math_commands.tex +0 -508
  830. package/bin/skills/ml-paper-writing/templates/iclr2026/natbib.sty +0 -1246
  831. package/bin/skills/ml-paper-writing/templates/icml2026/algorithm.sty +0 -79
  832. package/bin/skills/ml-paper-writing/templates/icml2026/algorithmic.sty +0 -201
  833. package/bin/skills/ml-paper-writing/templates/icml2026/example_paper.bib +0 -75
  834. package/bin/skills/ml-paper-writing/templates/icml2026/example_paper.pdf +0 -0
  835. package/bin/skills/ml-paper-writing/templates/icml2026/example_paper.tex +0 -662
  836. package/bin/skills/ml-paper-writing/templates/icml2026/fancyhdr.sty +0 -864
  837. package/bin/skills/ml-paper-writing/templates/icml2026/icml2026.bst +0 -1443
  838. package/bin/skills/ml-paper-writing/templates/icml2026/icml2026.sty +0 -767
  839. package/bin/skills/ml-paper-writing/templates/icml2026/icml_numpapers.pdf +0 -0
  840. package/bin/skills/ml-paper-writing/templates/neurips2025/Makefile +0 -36
  841. package/bin/skills/ml-paper-writing/templates/neurips2025/extra_pkgs.tex +0 -53
  842. package/bin/skills/ml-paper-writing/templates/neurips2025/main.tex +0 -38
  843. package/bin/skills/ml-paper-writing/templates/neurips2025/neurips.sty +0 -382
  844. package/bin/skills/mlflow/SKILL.md +0 -704
  845. package/bin/skills/mlflow/references/deployment.md +0 -744
  846. package/bin/skills/mlflow/references/model-registry.md +0 -770
  847. package/bin/skills/mlflow/references/tracking.md +0 -680
  848. package/bin/skills/modal/SKILL.md +0 -418
  849. package/bin/skills/modal/references/advanced-patterns.md +0 -695
  850. package/bin/skills/modal/references/examples-catalog.md +0 -423
  851. package/bin/skills/modal/references/troubleshooting.md +0 -494
  852. package/bin/skills/modal-research-gpu/SKILL.md +0 -238
  853. package/bin/skills/model-economics/SKILL.md +0 -238
  854. package/bin/skills/model-merging/SKILL.md +0 -539
  855. package/bin/skills/model-merging/references/evaluation.md +0 -462
  856. package/bin/skills/model-merging/references/examples.md +0 -428
  857. package/bin/skills/model-merging/references/methods.md +0 -352
  858. package/bin/skills/model-pruning/SKILL.md +0 -495
  859. package/bin/skills/model-pruning/references/wanda.md +0 -347
  860. package/bin/skills/moe-training/SKILL.md +0 -526
  861. package/bin/skills/moe-training/references/architectures.md +0 -432
  862. package/bin/skills/moe-training/references/inference.md +0 -348
  863. package/bin/skills/moe-training/references/training.md +0 -425
  864. package/bin/skills/molfeat/SKILL.md +0 -511
  865. package/bin/skills/molfeat/references/api_reference.md +0 -428
  866. package/bin/skills/molfeat/references/available_featurizers.md +0 -333
  867. package/bin/skills/molfeat/references/examples.md +0 -723
  868. package/bin/skills/nanogpt/SKILL.md +0 -290
  869. package/bin/skills/nanogpt/references/architecture.md +0 -382
  870. package/bin/skills/nanogpt/references/data.md +0 -476
  871. package/bin/skills/nanogpt/references/training.md +0 -564
  872. package/bin/skills/nemo-curator/SKILL.md +0 -383
  873. package/bin/skills/nemo-curator/references/deduplication.md +0 -87
  874. package/bin/skills/nemo-curator/references/filtering.md +0 -102
  875. package/bin/skills/nemo-evaluator/SKILL.md +0 -494
  876. package/bin/skills/nemo-evaluator/references/adapter-system.md +0 -340
  877. package/bin/skills/nemo-evaluator/references/configuration.md +0 -447
  878. package/bin/skills/nemo-evaluator/references/custom-benchmarks.md +0 -315
  879. package/bin/skills/nemo-evaluator/references/execution-backends.md +0 -361
  880. package/bin/skills/nemo-guardrails/SKILL.md +0 -297
  881. package/bin/skills/networkx/SKILL.md +0 -437
  882. package/bin/skills/networkx/references/algorithms.md +0 -383
  883. package/bin/skills/networkx/references/generators.md +0 -378
  884. package/bin/skills/networkx/references/graph-basics.md +0 -283
  885. package/bin/skills/networkx/references/io.md +0 -441
  886. package/bin/skills/networkx/references/visualization.md +0 -529
  887. package/bin/skills/neurokit2/SKILL.md +0 -356
  888. package/bin/skills/neurokit2/references/bio_module.md +0 -417
  889. package/bin/skills/neurokit2/references/complexity.md +0 -715
  890. package/bin/skills/neurokit2/references/ecg_cardiac.md +0 -355
  891. package/bin/skills/neurokit2/references/eda.md +0 -497
  892. package/bin/skills/neurokit2/references/eeg.md +0 -506
  893. package/bin/skills/neurokit2/references/emg.md +0 -408
  894. package/bin/skills/neurokit2/references/eog.md +0 -407
  895. package/bin/skills/neurokit2/references/epochs_events.md +0 -471
  896. package/bin/skills/neurokit2/references/hrv.md +0 -480
  897. package/bin/skills/neurokit2/references/ppg.md +0 -413
  898. package/bin/skills/neurokit2/references/rsp.md +0 -510
  899. package/bin/skills/neurokit2/references/signal_processing.md +0 -648
  900. package/bin/skills/neuropixels-analysis/SKILL.md +0 -350
  901. package/bin/skills/neuropixels-analysis/assets/analysis_template.py +0 -271
  902. package/bin/skills/neuropixels-analysis/references/AI_CURATION.md +0 -345
  903. package/bin/skills/neuropixels-analysis/references/ANALYSIS.md +0 -392
  904. package/bin/skills/neuropixels-analysis/references/AUTOMATED_CURATION.md +0 -358
  905. package/bin/skills/neuropixels-analysis/references/MOTION_CORRECTION.md +0 -323
  906. package/bin/skills/neuropixels-analysis/references/PREPROCESSING.md +0 -273
  907. package/bin/skills/neuropixels-analysis/references/QUALITY_METRICS.md +0 -359
  908. package/bin/skills/neuropixels-analysis/references/SPIKE_SORTING.md +0 -339
  909. package/bin/skills/neuropixels-analysis/references/api_reference.md +0 -415
  910. package/bin/skills/neuropixels-analysis/references/plotting_guide.md +0 -454
  911. package/bin/skills/neuropixels-analysis/references/standard_workflow.md +0 -385
  912. package/bin/skills/neuropixels-analysis/scripts/compute_metrics.py +0 -178
  913. package/bin/skills/neuropixels-analysis/scripts/explore_recording.py +0 -168
  914. package/bin/skills/neuropixels-analysis/scripts/export_to_phy.py +0 -79
  915. package/bin/skills/neuropixels-analysis/scripts/neuropixels_pipeline.py +0 -432
  916. package/bin/skills/neuropixels-analysis/scripts/preprocess_recording.py +0 -122
  917. package/bin/skills/neuropixels-analysis/scripts/run_sorting.py +0 -98
  918. package/bin/skills/nnsight/SKILL.md +0 -436
  919. package/bin/skills/nnsight/references/README.md +0 -78
  920. package/bin/skills/nnsight/references/api.md +0 -344
  921. package/bin/skills/nnsight/references/tutorials.md +0 -300
  922. package/bin/skills/offer-k-dense-web/SKILL.md +0 -21
  923. package/bin/skills/omero-integration/SKILL.md +0 -251
  924. package/bin/skills/omero-integration/references/advanced.md +0 -631
  925. package/bin/skills/omero-integration/references/connection.md +0 -369
  926. package/bin/skills/omero-integration/references/data_access.md +0 -544
  927. package/bin/skills/omero-integration/references/image_processing.md +0 -665
  928. package/bin/skills/omero-integration/references/metadata.md +0 -688
  929. package/bin/skills/omero-integration/references/rois.md +0 -648
  930. package/bin/skills/omero-integration/references/scripts.md +0 -637
  931. package/bin/skills/omero-integration/references/tables.md +0 -532
  932. package/bin/skills/openalex-database/SKILL.md +0 -494
  933. package/bin/skills/openalex-database/references/api_guide.md +0 -371
  934. package/bin/skills/openalex-database/references/common_queries.md +0 -381
  935. package/bin/skills/openalex-database/scripts/openalex_client.py +0 -337
  936. package/bin/skills/openalex-database/scripts/query_helpers.py +0 -306
  937. package/bin/skills/openrlhf/SKILL.md +0 -249
  938. package/bin/skills/openrlhf/references/algorithm-comparison.md +0 -404
  939. package/bin/skills/openrlhf/references/custom-rewards.md +0 -530
  940. package/bin/skills/openrlhf/references/hybrid-engine.md +0 -287
  941. package/bin/skills/openrlhf/references/multi-node-training.md +0 -454
  942. package/bin/skills/opentargets-database/SKILL.md +0 -373
  943. package/bin/skills/opentargets-database/references/api_reference.md +0 -249
  944. package/bin/skills/opentargets-database/references/evidence_types.md +0 -306
  945. package/bin/skills/opentargets-database/references/target_annotations.md +0 -401
  946. package/bin/skills/opentargets-database/scripts/query_opentargets.py +0 -403
  947. package/bin/skills/opentrons-integration/SKILL.md +0 -573
  948. package/bin/skills/opentrons-integration/references/api_reference.md +0 -366
  949. package/bin/skills/opentrons-integration/scripts/basic_protocol_template.py +0 -67
  950. package/bin/skills/opentrons-integration/scripts/pcr_setup_template.py +0 -154
  951. package/bin/skills/opentrons-integration/scripts/serial_dilution_template.py +0 -96
  952. package/bin/skills/outlines/SKILL.md +0 -652
  953. package/bin/skills/outlines/references/backends.md +0 -615
  954. package/bin/skills/outlines/references/examples.md +0 -773
  955. package/bin/skills/outlines/references/json_generation.md +0 -652
  956. package/bin/skills/paper-2-web/SKILL.md +0 -491
  957. package/bin/skills/paper-2-web/references/installation.md +0 -141
  958. package/bin/skills/paper-2-web/references/paper2poster.md +0 -346
  959. package/bin/skills/paper-2-web/references/paper2video.md +0 -305
  960. package/bin/skills/paper-2-web/references/paper2web.md +0 -187
  961. package/bin/skills/paper-2-web/references/usage_examples.md +0 -436
  962. package/bin/skills/pathml/SKILL.md +0 -166
  963. package/bin/skills/pathml/references/data_management.md +0 -742
  964. package/bin/skills/pathml/references/graphs.md +0 -653
  965. package/bin/skills/pathml/references/image_loading.md +0 -448
  966. package/bin/skills/pathml/references/machine_learning.md +0 -725
  967. package/bin/skills/pathml/references/multiparametric.md +0 -686
  968. package/bin/skills/pathml/references/preprocessing.md +0 -722
  969. package/bin/skills/pdb-database/SKILL.md +0 -309
  970. package/bin/skills/pdb-database/references/api_reference.md +0 -617
  971. package/bin/skills/peer-review/SKILL.md +0 -702
  972. package/bin/skills/peer-review/references/calibration_guidelines.md +0 -196
  973. package/bin/skills/peer-review/references/common_issues.md +0 -552
  974. package/bin/skills/peer-review/references/paper_mechanics.md +0 -269
  975. package/bin/skills/peer-review/references/reporting_standards.md +0 -290
  976. package/bin/skills/peer-review/references/scoring_rubric.md +0 -239
  977. package/bin/skills/peft/SKILL.md +0 -431
  978. package/bin/skills/peft/references/advanced-usage.md +0 -514
  979. package/bin/skills/peft/references/troubleshooting.md +0 -480
  980. package/bin/skills/pennylane/SKILL.md +0 -226
  981. package/bin/skills/pennylane/references/advanced_features.md +0 -667
  982. package/bin/skills/pennylane/references/devices_backends.md +0 -596
  983. package/bin/skills/pennylane/references/getting_started.md +0 -227
  984. package/bin/skills/pennylane/references/optimization.md +0 -671
  985. package/bin/skills/pennylane/references/quantum_chemistry.md +0 -567
  986. package/bin/skills/pennylane/references/quantum_circuits.md +0 -437
  987. package/bin/skills/pennylane/references/quantum_ml.md +0 -571
  988. package/bin/skills/perplexity-search/SKILL.md +0 -448
  989. package/bin/skills/perplexity-search/assets/.env.example +0 -16
  990. package/bin/skills/perplexity-search/references/model_comparison.md +0 -386
  991. package/bin/skills/perplexity-search/references/openrouter_setup.md +0 -454
  992. package/bin/skills/perplexity-search/references/search_strategies.md +0 -258
  993. package/bin/skills/perplexity-search/scripts/perplexity_search.py +0 -277
  994. package/bin/skills/perplexity-search/scripts/setup_env.py +0 -171
  995. package/bin/skills/phoenix/SKILL.md +0 -475
  996. package/bin/skills/phoenix/references/advanced-usage.md +0 -619
  997. package/bin/skills/phoenix/references/troubleshooting.md +0 -538
  998. package/bin/skills/pinecone/SKILL.md +0 -358
  999. package/bin/skills/pinecone/references/deployment.md +0 -181
  1000. package/bin/skills/plotly/SKILL.md +0 -267
  1001. package/bin/skills/plotly/references/chart-types.md +0 -488
  1002. package/bin/skills/plotly/references/export-interactivity.md +0 -453
  1003. package/bin/skills/plotly/references/graph-objects.md +0 -302
  1004. package/bin/skills/plotly/references/layouts-styling.md +0 -457
  1005. package/bin/skills/plotly/references/plotly-express.md +0 -213
  1006. package/bin/skills/polars/SKILL.md +0 -387
  1007. package/bin/skills/polars/references/best_practices.md +0 -649
  1008. package/bin/skills/polars/references/core_concepts.md +0 -378
  1009. package/bin/skills/polars/references/io_guide.md +0 -557
  1010. package/bin/skills/polars/references/operations.md +0 -602
  1011. package/bin/skills/polars/references/pandas_migration.md +0 -417
  1012. package/bin/skills/polars/references/transformations.md +0 -549
  1013. package/bin/skills/pptx-posters/SKILL.md +0 -410
  1014. package/bin/skills/pptx-posters/assets/poster_html_template.html +0 -257
  1015. package/bin/skills/pptx-posters/assets/poster_quality_checklist.md +0 -358
  1016. package/bin/skills/pptx-posters/references/poster_content_guide.md +0 -748
  1017. package/bin/skills/pptx-posters/references/poster_design_principles.md +0 -806
  1018. package/bin/skills/pptx-posters/references/poster_layout_design.md +0 -900
  1019. package/bin/skills/prime-intellect-lab/README.md +0 -69
  1020. package/bin/skills/prime-intellect-lab/SKILL.md +0 -598
  1021. package/bin/skills/prime-intellect-lab/templates/basic_rl_training.toml +0 -82
  1022. package/bin/skills/protocolsio-integration/SKILL.md +0 -421
  1023. package/bin/skills/protocolsio-integration/references/additional_features.md +0 -387
  1024. package/bin/skills/protocolsio-integration/references/authentication.md +0 -100
  1025. package/bin/skills/protocolsio-integration/references/discussions.md +0 -225
  1026. package/bin/skills/protocolsio-integration/references/file_manager.md +0 -412
  1027. package/bin/skills/protocolsio-integration/references/protocols_api.md +0 -294
  1028. package/bin/skills/protocolsio-integration/references/workspaces.md +0 -293
  1029. package/bin/skills/pubchem-database/SKILL.md +0 -574
  1030. package/bin/skills/pubchem-database/references/api_reference.md +0 -440
  1031. package/bin/skills/pubchem-database/scripts/bioactivity_query.py +0 -367
  1032. package/bin/skills/pubchem-database/scripts/compound_search.py +0 -297
  1033. package/bin/skills/pubmed-database/SKILL.md +0 -460
  1034. package/bin/skills/pubmed-database/references/api_reference.md +0 -298
  1035. package/bin/skills/pubmed-database/references/common_queries.md +0 -453
  1036. package/bin/skills/pubmed-database/references/search_syntax.md +0 -436
  1037. package/bin/skills/pufferlib/SKILL.md +0 -436
  1038. package/bin/skills/pufferlib/references/environments.md +0 -508
  1039. package/bin/skills/pufferlib/references/integration.md +0 -621
  1040. package/bin/skills/pufferlib/references/policies.md +0 -653
  1041. package/bin/skills/pufferlib/references/training.md +0 -360
  1042. package/bin/skills/pufferlib/references/vectorization.md +0 -557
  1043. package/bin/skills/pufferlib/scripts/env_template.py +0 -340
  1044. package/bin/skills/pufferlib/scripts/train_template.py +0 -239
  1045. package/bin/skills/pydeseq2/SKILL.md +0 -559
  1046. package/bin/skills/pydeseq2/references/api_reference.md +0 -228
  1047. package/bin/skills/pydeseq2/references/workflow_guide.md +0 -582
  1048. package/bin/skills/pydeseq2/scripts/run_deseq2_analysis.py +0 -353
  1049. package/bin/skills/pydicom/SKILL.md +0 -434
  1050. package/bin/skills/pydicom/references/common_tags.md +0 -228
  1051. package/bin/skills/pydicom/references/transfer_syntaxes.md +0 -352
  1052. package/bin/skills/pydicom/scripts/anonymize_dicom.py +0 -137
  1053. package/bin/skills/pydicom/scripts/dicom_to_image.py +0 -172
  1054. package/bin/skills/pydicom/scripts/extract_metadata.py +0 -173
  1055. package/bin/skills/pyhealth/SKILL.md +0 -491
  1056. package/bin/skills/pyhealth/references/datasets.md +0 -178
  1057. package/bin/skills/pyhealth/references/medical_coding.md +0 -284
  1058. package/bin/skills/pyhealth/references/models.md +0 -594
  1059. package/bin/skills/pyhealth/references/preprocessing.md +0 -638
  1060. package/bin/skills/pyhealth/references/tasks.md +0 -379
  1061. package/bin/skills/pyhealth/references/training_evaluation.md +0 -648
  1062. package/bin/skills/pylabrobot/SKILL.md +0 -185
  1063. package/bin/skills/pylabrobot/references/analytical-equipment.md +0 -464
  1064. package/bin/skills/pylabrobot/references/hardware-backends.md +0 -480
  1065. package/bin/skills/pylabrobot/references/liquid-handling.md +0 -403
  1066. package/bin/skills/pylabrobot/references/material-handling.md +0 -620
  1067. package/bin/skills/pylabrobot/references/resources.md +0 -489
  1068. package/bin/skills/pylabrobot/references/visualization.md +0 -532
  1069. package/bin/skills/pymatgen/SKILL.md +0 -691
  1070. package/bin/skills/pymatgen/references/analysis_modules.md +0 -530
  1071. package/bin/skills/pymatgen/references/core_classes.md +0 -318
  1072. package/bin/skills/pymatgen/references/io_formats.md +0 -469
  1073. package/bin/skills/pymatgen/references/materials_project_api.md +0 -517
  1074. package/bin/skills/pymatgen/references/transformations_workflows.md +0 -591
  1075. package/bin/skills/pymatgen/scripts/phase_diagram_generator.py +0 -233
  1076. package/bin/skills/pymatgen/scripts/structure_analyzer.py +0 -266
  1077. package/bin/skills/pymatgen/scripts/structure_converter.py +0 -169
  1078. package/bin/skills/pymc/SKILL.md +0 -572
  1079. package/bin/skills/pymc/assets/hierarchical_model_template.py +0 -333
  1080. package/bin/skills/pymc/assets/linear_regression_template.py +0 -241
  1081. package/bin/skills/pymc/references/distributions.md +0 -320
  1082. package/bin/skills/pymc/references/sampling_inference.md +0 -424
  1083. package/bin/skills/pymc/references/workflows.md +0 -526
  1084. package/bin/skills/pymc/scripts/model_comparison.py +0 -387
  1085. package/bin/skills/pymc/scripts/model_diagnostics.py +0 -350
  1086. package/bin/skills/pymoo/SKILL.md +0 -571
  1087. package/bin/skills/pymoo/references/algorithms.md +0 -180
  1088. package/bin/skills/pymoo/references/constraints_mcdm.md +0 -417
  1089. package/bin/skills/pymoo/references/operators.md +0 -345
  1090. package/bin/skills/pymoo/references/problems.md +0 -265
  1091. package/bin/skills/pymoo/references/visualization.md +0 -353
  1092. package/bin/skills/pymoo/scripts/custom_problem_example.py +0 -181
  1093. package/bin/skills/pymoo/scripts/decision_making_example.py +0 -161
  1094. package/bin/skills/pymoo/scripts/many_objective_example.py +0 -72
  1095. package/bin/skills/pymoo/scripts/multi_objective_example.py +0 -63
  1096. package/bin/skills/pymoo/scripts/single_objective_example.py +0 -59
  1097. package/bin/skills/pyopenms/SKILL.md +0 -217
  1098. package/bin/skills/pyopenms/references/data_structures.md +0 -497
  1099. package/bin/skills/pyopenms/references/feature_detection.md +0 -410
  1100. package/bin/skills/pyopenms/references/file_io.md +0 -349
  1101. package/bin/skills/pyopenms/references/identification.md +0 -422
  1102. package/bin/skills/pyopenms/references/metabolomics.md +0 -482
  1103. package/bin/skills/pyopenms/references/signal_processing.md +0 -433
  1104. package/bin/skills/pysam/SKILL.md +0 -265
  1105. package/bin/skills/pysam/references/alignment_files.md +0 -280
  1106. package/bin/skills/pysam/references/common_workflows.md +0 -520
  1107. package/bin/skills/pysam/references/sequence_files.md +0 -407
  1108. package/bin/skills/pysam/references/variant_files.md +0 -365
  1109. package/bin/skills/pytdc/SKILL.md +0 -460
  1110. package/bin/skills/pytdc/references/datasets.md +0 -246
  1111. package/bin/skills/pytdc/references/oracles.md +0 -400
  1112. package/bin/skills/pytdc/references/utilities.md +0 -684
  1113. package/bin/skills/pytdc/scripts/benchmark_evaluation.py +0 -327
  1114. package/bin/skills/pytdc/scripts/load_and_split_data.py +0 -214
  1115. package/bin/skills/pytdc/scripts/molecular_generation.py +0 -404
  1116. package/bin/skills/pytorch-fsdp/SKILL.md +0 -126
  1117. package/bin/skills/pytorch-fsdp/references/index.md +0 -7
  1118. package/bin/skills/pytorch-fsdp/references/other.md +0 -4249
  1119. package/bin/skills/pytorch-lightning/SKILL.md +0 -346
  1120. package/bin/skills/pytorch-lightning/references/callbacks.md +0 -436
  1121. package/bin/skills/pytorch-lightning/references/distributed.md +0 -490
  1122. package/bin/skills/pytorch-lightning/references/hyperparameter-tuning.md +0 -556
  1123. package/bin/skills/pyvene/SKILL.md +0 -473
  1124. package/bin/skills/pyvene/references/README.md +0 -73
  1125. package/bin/skills/pyvene/references/api.md +0 -383
  1126. package/bin/skills/pyvene/references/tutorials.md +0 -376
  1127. package/bin/skills/qdrant/SKILL.md +0 -493
  1128. package/bin/skills/qdrant/references/advanced-usage.md +0 -648
  1129. package/bin/skills/qdrant/references/troubleshooting.md +0 -631
  1130. package/bin/skills/qiskit/SKILL.md +0 -275
  1131. package/bin/skills/qiskit/references/algorithms.md +0 -607
  1132. package/bin/skills/qiskit/references/backends.md +0 -433
  1133. package/bin/skills/qiskit/references/circuits.md +0 -197
  1134. package/bin/skills/qiskit/references/patterns.md +0 -533
  1135. package/bin/skills/qiskit/references/primitives.md +0 -277
  1136. package/bin/skills/qiskit/references/setup.md +0 -99
  1137. package/bin/skills/qiskit/references/transpilation.md +0 -286
  1138. package/bin/skills/qiskit/references/visualization.md +0 -415
  1139. package/bin/skills/qutip/SKILL.md +0 -318
  1140. package/bin/skills/qutip/references/advanced.md +0 -555
  1141. package/bin/skills/qutip/references/analysis.md +0 -523
  1142. package/bin/skills/qutip/references/core_concepts.md +0 -293
  1143. package/bin/skills/qutip/references/time_evolution.md +0 -348
  1144. package/bin/skills/qutip/references/visualization.md +0 -431
  1145. package/bin/skills/ray-data/SKILL.md +0 -326
  1146. package/bin/skills/ray-data/references/integration.md +0 -82
  1147. package/bin/skills/ray-data/references/transformations.md +0 -83
  1148. package/bin/skills/ray-train/SKILL.md +0 -406
  1149. package/bin/skills/ray-train/references/multi-node.md +0 -628
  1150. package/bin/skills/rdkit/SKILL.md +0 -780
  1151. package/bin/skills/rdkit/references/api_reference.md +0 -432
  1152. package/bin/skills/rdkit/references/descriptors_reference.md +0 -595
  1153. package/bin/skills/rdkit/references/smarts_patterns.md +0 -668
  1154. package/bin/skills/rdkit/scripts/molecular_properties.py +0 -243
  1155. package/bin/skills/rdkit/scripts/similarity_search.py +0 -297
  1156. package/bin/skills/rdkit/scripts/substructure_filter.py +0 -386
  1157. package/bin/skills/reactome-database/SKILL.md +0 -278
  1158. package/bin/skills/reactome-database/references/api_reference.md +0 -465
  1159. package/bin/skills/reactome-database/scripts/reactome_query.py +0 -286
  1160. package/bin/skills/research-grants/README.md +0 -285
  1161. package/bin/skills/research-grants/SKILL.md +0 -938
  1162. package/bin/skills/research-grants/assets/budget_justification_template.md +0 -453
  1163. package/bin/skills/research-grants/assets/nih_specific_aims_template.md +0 -166
  1164. package/bin/skills/research-grants/assets/nsf_project_summary_template.md +0 -92
  1165. package/bin/skills/research-grants/references/broader_impacts.md +0 -392
  1166. package/bin/skills/research-grants/references/darpa_guidelines.md +0 -636
  1167. package/bin/skills/research-grants/references/doe_guidelines.md +0 -586
  1168. package/bin/skills/research-grants/references/nih_guidelines.md +0 -851
  1169. package/bin/skills/research-grants/references/nsf_guidelines.md +0 -570
  1170. package/bin/skills/research-grants/references/specific_aims_guide.md +0 -458
  1171. package/bin/skills/research-lookup/README.md +0 -156
  1172. package/bin/skills/research-lookup/SKILL.md +0 -606
  1173. package/bin/skills/research-lookup/examples.py +0 -174
  1174. package/bin/skills/research-lookup/lookup.py +0 -187
  1175. package/bin/skills/research-lookup/research_lookup.py +0 -483
  1176. package/bin/skills/research-lookup/scripts/research_lookup.py +0 -483
  1177. package/bin/skills/rowan/SKILL.md +0 -427
  1178. package/bin/skills/rowan/references/api_reference.md +0 -413
  1179. package/bin/skills/rowan/references/molecule_handling.md +0 -429
  1180. package/bin/skills/rowan/references/proteins_and_organization.md +0 -499
  1181. package/bin/skills/rowan/references/rdkit_native.md +0 -438
  1182. package/bin/skills/rowan/references/results_interpretation.md +0 -481
  1183. package/bin/skills/rowan/references/workflow_types.md +0 -591
  1184. package/bin/skills/rwkv/SKILL.md +0 -260
  1185. package/bin/skills/rwkv/references/architecture-details.md +0 -344
  1186. package/bin/skills/rwkv/references/rwkv7.md +0 -386
  1187. package/bin/skills/rwkv/references/state-management.md +0 -369
  1188. package/bin/skills/saelens/SKILL.md +0 -386
  1189. package/bin/skills/saelens/references/README.md +0 -70
  1190. package/bin/skills/saelens/references/api.md +0 -333
  1191. package/bin/skills/saelens/references/tutorials.md +0 -318
  1192. package/bin/skills/scanpy/SKILL.md +0 -386
  1193. package/bin/skills/scanpy/assets/analysis_template.py +0 -295
  1194. package/bin/skills/scanpy/references/api_reference.md +0 -251
  1195. package/bin/skills/scanpy/references/plotting_guide.md +0 -352
  1196. package/bin/skills/scanpy/references/standard_workflow.md +0 -206
  1197. package/bin/skills/scanpy/scripts/qc_analysis.py +0 -200
  1198. package/bin/skills/scholar-evaluation/SKILL.md +0 -289
  1199. package/bin/skills/scholar-evaluation/references/evaluation_framework.md +0 -663
  1200. package/bin/skills/scholar-evaluation/scripts/calculate_scores.py +0 -366
  1201. package/bin/skills/scientific-brainstorming/SKILL.md +0 -191
  1202. package/bin/skills/scientific-brainstorming/references/brainstorming_methods.md +0 -326
  1203. package/bin/skills/scientific-critical-thinking/SKILL.md +0 -566
  1204. package/bin/skills/scientific-critical-thinking/references/common_biases.md +0 -364
  1205. package/bin/skills/scientific-critical-thinking/references/evidence_hierarchy.md +0 -484
  1206. package/bin/skills/scientific-critical-thinking/references/experimental_design.md +0 -496
  1207. package/bin/skills/scientific-critical-thinking/references/logical_fallacies.md +0 -478
  1208. package/bin/skills/scientific-critical-thinking/references/scientific_method.md +0 -169
  1209. package/bin/skills/scientific-critical-thinking/references/statistical_pitfalls.md +0 -506
  1210. package/bin/skills/scientific-schematics/QUICK_REFERENCE.md +0 -207
  1211. package/bin/skills/scientific-schematics/README.md +0 -327
  1212. package/bin/skills/scientific-schematics/SKILL.md +0 -615
  1213. package/bin/skills/scientific-schematics/example_usage.sh +0 -89
  1214. package/bin/skills/scientific-schematics/references/best_practices.md +0 -559
  1215. package/bin/skills/scientific-schematics/scripts/generate_schematic.py +0 -135
  1216. package/bin/skills/scientific-schematics/scripts/generate_schematic_ai.py +0 -837
  1217. package/bin/skills/scientific-schematics/test_ai_generation.py +0 -243
  1218. package/bin/skills/scientific-slides/SKILL.md +0 -942
  1219. package/bin/skills/scientific-slides/assets/timing_guidelines.md +0 -597
  1220. package/bin/skills/scientific-slides/references/data_visualization_slides.md +0 -708
  1221. package/bin/skills/scientific-slides/references/presentation_structure.md +0 -642
  1222. package/bin/skills/scientific-slides/references/slide_design_principles.md +0 -849
  1223. package/bin/skills/scientific-slides/references/talk_types_guide.md +0 -687
  1224. package/bin/skills/scientific-slides/references/visual_review_workflow.md +0 -775
  1225. package/bin/skills/scientific-slides/scripts/generate_slide_image.py +0 -143
  1226. package/bin/skills/scientific-slides/scripts/generate_slide_image_ai.py +0 -748
  1227. package/bin/skills/scientific-slides/scripts/pdf_to_images.py +0 -201
  1228. package/bin/skills/scientific-slides/scripts/slides_to_pdf.py +0 -220
  1229. package/bin/skills/scientific-slides/scripts/validate_presentation.py +0 -367
  1230. package/bin/skills/scientific-visualization/SKILL.md +0 -779
  1231. package/bin/skills/scientific-visualization/assets/color_palettes.py +0 -197
  1232. package/bin/skills/scientific-visualization/assets/nature.mplstyle +0 -63
  1233. package/bin/skills/scientific-visualization/assets/presentation.mplstyle +0 -61
  1234. package/bin/skills/scientific-visualization/assets/publication.mplstyle +0 -68
  1235. package/bin/skills/scientific-visualization/references/color_palettes.md +0 -348
  1236. package/bin/skills/scientific-visualization/references/journal_requirements.md +0 -320
  1237. package/bin/skills/scientific-visualization/references/matplotlib_examples.md +0 -620
  1238. package/bin/skills/scientific-visualization/references/publication_guidelines.md +0 -205
  1239. package/bin/skills/scientific-visualization/scripts/figure_export.py +0 -343
  1240. package/bin/skills/scientific-visualization/scripts/style_presets.py +0 -416
  1241. package/bin/skills/scientific-writing/SKILL.md +0 -714
  1242. package/bin/skills/scientific-writing/assets/REPORT_FORMATTING_GUIDE.md +0 -574
  1243. package/bin/skills/scientific-writing/assets/scientific_report.sty +0 -606
  1244. package/bin/skills/scientific-writing/assets/scientific_report_template.tex +0 -449
  1245. package/bin/skills/scientific-writing/references/citation_styles.md +0 -720
  1246. package/bin/skills/scientific-writing/references/figures_tables.md +0 -806
  1247. package/bin/skills/scientific-writing/references/imrad_structure.md +0 -686
  1248. package/bin/skills/scientific-writing/references/professional_report_formatting.md +0 -664
  1249. package/bin/skills/scientific-writing/references/reporting_guidelines.md +0 -748
  1250. package/bin/skills/scientific-writing/references/writing_principles.md +0 -824
  1251. package/bin/skills/scikit-bio/SKILL.md +0 -437
  1252. package/bin/skills/scikit-bio/references/api_reference.md +0 -749
  1253. package/bin/skills/scikit-learn/SKILL.md +0 -521
  1254. package/bin/skills/scikit-learn/references/model_evaluation.md +0 -592
  1255. package/bin/skills/scikit-learn/references/pipelines_and_composition.md +0 -612
  1256. package/bin/skills/scikit-learn/references/preprocessing.md +0 -606
  1257. package/bin/skills/scikit-learn/references/quick_reference.md +0 -433
  1258. package/bin/skills/scikit-learn/references/supervised_learning.md +0 -378
  1259. package/bin/skills/scikit-learn/references/unsupervised_learning.md +0 -505
  1260. package/bin/skills/scikit-learn/scripts/classification_pipeline.py +0 -257
  1261. package/bin/skills/scikit-learn/scripts/clustering_analysis.py +0 -386
  1262. package/bin/skills/scikit-survival/SKILL.md +0 -399
  1263. package/bin/skills/scikit-survival/references/competing-risks.md +0 -397
  1264. package/bin/skills/scikit-survival/references/cox-models.md +0 -182
  1265. package/bin/skills/scikit-survival/references/data-handling.md +0 -494
  1266. package/bin/skills/scikit-survival/references/ensemble-models.md +0 -327
  1267. package/bin/skills/scikit-survival/references/evaluation-metrics.md +0 -378
  1268. package/bin/skills/scikit-survival/references/svm-models.md +0 -411
  1269. package/bin/skills/scvi-tools/SKILL.md +0 -190
  1270. package/bin/skills/scvi-tools/references/differential-expression.md +0 -581
  1271. package/bin/skills/scvi-tools/references/models-atac-seq.md +0 -321
  1272. package/bin/skills/scvi-tools/references/models-multimodal.md +0 -367
  1273. package/bin/skills/scvi-tools/references/models-scrna-seq.md +0 -330
  1274. package/bin/skills/scvi-tools/references/models-spatial.md +0 -438
  1275. package/bin/skills/scvi-tools/references/models-specialized.md +0 -408
  1276. package/bin/skills/scvi-tools/references/theoretical-foundations.md +0 -438
  1277. package/bin/skills/scvi-tools/references/workflows.md +0 -546
  1278. package/bin/skills/seaborn/SKILL.md +0 -673
  1279. package/bin/skills/seaborn/references/examples.md +0 -822
  1280. package/bin/skills/seaborn/references/function_reference.md +0 -770
  1281. package/bin/skills/seaborn/references/objects_interface.md +0 -964
  1282. package/bin/skills/segment-anything/SKILL.md +0 -500
  1283. package/bin/skills/segment-anything/references/advanced-usage.md +0 -589
  1284. package/bin/skills/segment-anything/references/troubleshooting.md +0 -484
  1285. package/bin/skills/sentence-transformers/SKILL.md +0 -255
  1286. package/bin/skills/sentence-transformers/references/models.md +0 -123
  1287. package/bin/skills/sentencepiece/SKILL.md +0 -235
  1288. package/bin/skills/sentencepiece/references/algorithms.md +0 -200
  1289. package/bin/skills/sentencepiece/references/training.md +0 -304
  1290. package/bin/skills/sglang/SKILL.md +0 -442
  1291. package/bin/skills/sglang/references/deployment.md +0 -490
  1292. package/bin/skills/sglang/references/radix-attention.md +0 -413
  1293. package/bin/skills/sglang/references/structured-generation.md +0 -541
  1294. package/bin/skills/shap/SKILL.md +0 -566
  1295. package/bin/skills/shap/references/explainers.md +0 -339
  1296. package/bin/skills/shap/references/plots.md +0 -507
  1297. package/bin/skills/shap/references/theory.md +0 -449
  1298. package/bin/skills/shap/references/workflows.md +0 -605
  1299. package/bin/skills/simpo/SKILL.md +0 -219
  1300. package/bin/skills/simpo/references/datasets.md +0 -478
  1301. package/bin/skills/simpo/references/hyperparameters.md +0 -452
  1302. package/bin/skills/simpo/references/loss-functions.md +0 -350
  1303. package/bin/skills/simpy/SKILL.md +0 -429
  1304. package/bin/skills/simpy/references/events.md +0 -374
  1305. package/bin/skills/simpy/references/monitoring.md +0 -475
  1306. package/bin/skills/simpy/references/process-interaction.md +0 -424
  1307. package/bin/skills/simpy/references/real-time.md +0 -395
  1308. package/bin/skills/simpy/references/resources.md +0 -275
  1309. package/bin/skills/simpy/scripts/basic_simulation_template.py +0 -193
  1310. package/bin/skills/simpy/scripts/resource_monitor.py +0 -345
  1311. package/bin/skills/skypilot/SKILL.md +0 -509
  1312. package/bin/skills/skypilot/references/advanced-usage.md +0 -491
  1313. package/bin/skills/skypilot/references/troubleshooting.md +0 -570
  1314. package/bin/skills/slime/SKILL.md +0 -464
  1315. package/bin/skills/slime/references/api-reference.md +0 -392
  1316. package/bin/skills/slime/references/troubleshooting.md +0 -386
  1317. package/bin/skills/speculative-decoding/SKILL.md +0 -467
  1318. package/bin/skills/speculative-decoding/references/lookahead.md +0 -309
  1319. package/bin/skills/speculative-decoding/references/medusa.md +0 -350
  1320. package/bin/skills/stable-baselines3/SKILL.md +0 -299
  1321. package/bin/skills/stable-baselines3/references/algorithms.md +0 -333
  1322. package/bin/skills/stable-baselines3/references/callbacks.md +0 -556
  1323. package/bin/skills/stable-baselines3/references/custom_environments.md +0 -526
  1324. package/bin/skills/stable-baselines3/references/vectorized_envs.md +0 -568
  1325. package/bin/skills/stable-baselines3/scripts/custom_env_template.py +0 -314
  1326. package/bin/skills/stable-baselines3/scripts/evaluate_agent.py +0 -245
  1327. package/bin/skills/stable-baselines3/scripts/train_rl_agent.py +0 -165
  1328. package/bin/skills/stable-diffusion/SKILL.md +0 -519
  1329. package/bin/skills/stable-diffusion/references/advanced-usage.md +0 -716
  1330. package/bin/skills/stable-diffusion/references/troubleshooting.md +0 -555
  1331. package/bin/skills/statistical-analysis/SKILL.md +0 -632
  1332. package/bin/skills/statistical-analysis/references/assumptions_and_diagnostics.md +0 -369
  1333. package/bin/skills/statistical-analysis/references/bayesian_statistics.md +0 -661
  1334. package/bin/skills/statistical-analysis/references/effect_sizes_and_power.md +0 -581
  1335. package/bin/skills/statistical-analysis/references/reporting_standards.md +0 -469
  1336. package/bin/skills/statistical-analysis/references/test_selection_guide.md +0 -129
  1337. package/bin/skills/statistical-analysis/scripts/assumption_checks.py +0 -539
  1338. package/bin/skills/statsmodels/SKILL.md +0 -614
  1339. package/bin/skills/statsmodels/references/discrete_choice.md +0 -669
  1340. package/bin/skills/statsmodels/references/glm.md +0 -619
  1341. package/bin/skills/statsmodels/references/linear_models.md +0 -447
  1342. package/bin/skills/statsmodels/references/stats_diagnostics.md +0 -859
  1343. package/bin/skills/statsmodels/references/time_series.md +0 -716
  1344. package/bin/skills/string-database/SKILL.md +0 -534
  1345. package/bin/skills/string-database/references/string_reference.md +0 -455
  1346. package/bin/skills/string-database/scripts/string_api.py +0 -369
  1347. package/bin/skills/sympy/SKILL.md +0 -500
  1348. package/bin/skills/sympy/references/advanced-topics.md +0 -635
  1349. package/bin/skills/sympy/references/code-generation-printing.md +0 -599
  1350. package/bin/skills/sympy/references/core-capabilities.md +0 -348
  1351. package/bin/skills/sympy/references/matrices-linear-algebra.md +0 -526
  1352. package/bin/skills/sympy/references/physics-mechanics.md +0 -592
  1353. package/bin/skills/tensorboard/SKILL.md +0 -629
  1354. package/bin/skills/tensorboard/references/integrations.md +0 -638
  1355. package/bin/skills/tensorboard/references/profiling.md +0 -545
  1356. package/bin/skills/tensorboard/references/visualization.md +0 -620
  1357. package/bin/skills/tensorpool/SKILL.md +0 -519
  1358. package/bin/skills/tensorrt-llm/SKILL.md +0 -187
  1359. package/bin/skills/tensorrt-llm/references/multi-gpu.md +0 -298
  1360. package/bin/skills/tensorrt-llm/references/optimization.md +0 -242
  1361. package/bin/skills/tensorrt-llm/references/serving.md +0 -470
  1362. package/bin/skills/tinker/SKILL.md +0 -466
  1363. package/bin/skills/tinker/references/api-reference.md +0 -168
  1364. package/bin/skills/tinker/references/dpo-and-preference.md +0 -174
  1365. package/bin/skills/tinker/references/evaluations.md +0 -183
  1366. package/bin/skills/tinker/references/getting-started.md +0 -157
  1367. package/bin/skills/tinker/references/loss-functions.md +0 -163
  1368. package/bin/skills/tinker/references/models-and-lora.md +0 -148
  1369. package/bin/skills/tinker/references/recipes.md +0 -326
  1370. package/bin/skills/tinker/references/reinforcement-learning.md +0 -357
  1371. package/bin/skills/tinker/references/rendering.md +0 -255
  1372. package/bin/skills/tinker/references/supervised-learning.md +0 -256
  1373. package/bin/skills/tinker-training-cost/SKILL.md +0 -187
  1374. package/bin/skills/tinker-training-cost/scripts/calculate_cost.py +0 -123
  1375. package/bin/skills/together-ai/SKILL.md +0 -722
  1376. package/bin/skills/torch_geometric/SKILL.md +0 -676
  1377. package/bin/skills/torch_geometric/references/datasets_reference.md +0 -574
  1378. package/bin/skills/torch_geometric/references/layers_reference.md +0 -485
  1379. package/bin/skills/torch_geometric/references/transforms_reference.md +0 -679
  1380. package/bin/skills/torch_geometric/scripts/benchmark_model.py +0 -309
  1381. package/bin/skills/torch_geometric/scripts/create_gnn_template.py +0 -529
  1382. package/bin/skills/torch_geometric/scripts/visualize_graph.py +0 -313
  1383. package/bin/skills/torchdrug/SKILL.md +0 -450
  1384. package/bin/skills/torchdrug/references/core_concepts.md +0 -565
  1385. package/bin/skills/torchdrug/references/datasets.md +0 -380
  1386. package/bin/skills/torchdrug/references/knowledge_graphs.md +0 -320
  1387. package/bin/skills/torchdrug/references/models_architectures.md +0 -541
  1388. package/bin/skills/torchdrug/references/molecular_generation.md +0 -352
  1389. package/bin/skills/torchdrug/references/molecular_property_prediction.md +0 -169
  1390. package/bin/skills/torchdrug/references/protein_modeling.md +0 -272
  1391. package/bin/skills/torchdrug/references/retrosynthesis.md +0 -436
  1392. package/bin/skills/torchforge/SKILL.md +0 -433
  1393. package/bin/skills/torchforge/references/api-reference.md +0 -327
  1394. package/bin/skills/torchforge/references/troubleshooting.md +0 -409
  1395. package/bin/skills/torchtitan/SKILL.md +0 -358
  1396. package/bin/skills/torchtitan/references/checkpoint.md +0 -181
  1397. package/bin/skills/torchtitan/references/custom-models.md +0 -258
  1398. package/bin/skills/torchtitan/references/float8.md +0 -133
  1399. package/bin/skills/torchtitan/references/fsdp.md +0 -126
  1400. package/bin/skills/training-data-pipeline/SKILL.md +0 -427
  1401. package/bin/skills/training-data-pipeline/references/data-quality.md +0 -136
  1402. package/bin/skills/training-data-pipeline/references/frontier-distillation.md +0 -129
  1403. package/bin/skills/training-data-pipeline/references/production-data-formatting.md +0 -126
  1404. package/bin/skills/transformer-lens/SKILL.md +0 -346
  1405. package/bin/skills/transformer-lens/references/README.md +0 -54
  1406. package/bin/skills/transformer-lens/references/api.md +0 -362
  1407. package/bin/skills/transformer-lens/references/tutorials.md +0 -339
  1408. package/bin/skills/transformers/SKILL.md +0 -164
  1409. package/bin/skills/transformers/references/generation.md +0 -467
  1410. package/bin/skills/transformers/references/models.md +0 -361
  1411. package/bin/skills/transformers/references/pipelines.md +0 -335
  1412. package/bin/skills/transformers/references/tokenizers.md +0 -447
  1413. package/bin/skills/transformers/references/training.md +0 -500
  1414. package/bin/skills/treatment-plans/README.md +0 -488
  1415. package/bin/skills/treatment-plans/SKILL.md +0 -1579
  1416. package/bin/skills/treatment-plans/assets/STYLING_QUICK_REFERENCE.md +0 -185
  1417. package/bin/skills/treatment-plans/assets/chronic_disease_management_plan.tex +0 -665
  1418. package/bin/skills/treatment-plans/assets/general_medical_treatment_plan.tex +0 -547
  1419. package/bin/skills/treatment-plans/assets/medical_treatment_plan.sty +0 -222
  1420. package/bin/skills/treatment-plans/assets/mental_health_treatment_plan.tex +0 -774
  1421. package/bin/skills/treatment-plans/assets/one_page_treatment_plan.tex +0 -193
  1422. package/bin/skills/treatment-plans/assets/pain_management_plan.tex +0 -799
  1423. package/bin/skills/treatment-plans/assets/perioperative_care_plan.tex +0 -753
  1424. package/bin/skills/treatment-plans/assets/quality_checklist.md +0 -471
  1425. package/bin/skills/treatment-plans/assets/rehabilitation_treatment_plan.tex +0 -756
  1426. package/bin/skills/treatment-plans/references/goal_setting_frameworks.md +0 -411
  1427. package/bin/skills/treatment-plans/references/intervention_guidelines.md +0 -507
  1428. package/bin/skills/treatment-plans/references/regulatory_compliance.md +0 -476
  1429. package/bin/skills/treatment-plans/references/specialty_specific_guidelines.md +0 -655
  1430. package/bin/skills/treatment-plans/references/treatment_plan_standards.md +0 -485
  1431. package/bin/skills/treatment-plans/scripts/check_completeness.py +0 -322
  1432. package/bin/skills/treatment-plans/scripts/generate_template.py +0 -233
  1433. package/bin/skills/treatment-plans/scripts/timeline_generator.py +0 -385
  1434. package/bin/skills/treatment-plans/scripts/validate_treatment_plan.py +0 -369
  1435. package/bin/skills/trl-fine-tuning/SKILL.md +0 -455
  1436. package/bin/skills/trl-fine-tuning/references/dpo-variants.md +0 -227
  1437. package/bin/skills/trl-fine-tuning/references/online-rl.md +0 -82
  1438. package/bin/skills/trl-fine-tuning/references/reward-modeling.md +0 -122
  1439. package/bin/skills/trl-fine-tuning/references/sft-training.md +0 -168
  1440. package/bin/skills/umap-learn/SKILL.md +0 -479
  1441. package/bin/skills/umap-learn/references/api_reference.md +0 -532
  1442. package/bin/skills/uniprot-database/SKILL.md +0 -195
  1443. package/bin/skills/uniprot-database/references/api_examples.md +0 -413
  1444. package/bin/skills/uniprot-database/references/api_fields.md +0 -275
  1445. package/bin/skills/uniprot-database/references/id_mapping_databases.md +0 -285
  1446. package/bin/skills/uniprot-database/references/query_syntax.md +0 -256
  1447. package/bin/skills/uniprot-database/scripts/uniprot_client.py +0 -341
  1448. package/bin/skills/unsloth/SKILL.md +0 -635
  1449. package/bin/skills/unsloth/docs/advanced-rl.md +0 -222
  1450. package/bin/skills/unsloth/docs/chat-templates.md +0 -141
  1451. package/bin/skills/unsloth/docs/datasets.md +0 -489
  1452. package/bin/skills/unsloth/docs/docker-extended.md +0 -99
  1453. package/bin/skills/unsloth/docs/dynamic-ggufs-2.0.md +0 -116
  1454. package/bin/skills/unsloth/docs/dynamic-ggufs-aider.md +0 -118
  1455. package/bin/skills/unsloth/docs/faq.md +0 -91
  1456. package/bin/skills/unsloth/docs/fp16-vs-bf16.md +0 -61
  1457. package/bin/skills/unsloth/docs/fp8-rl.md +0 -224
  1458. package/bin/skills/unsloth/docs/glm-4.7-flash.md +0 -997
  1459. package/bin/skills/unsloth/docs/inference-deployment-overview.md +0 -17
  1460. package/bin/skills/unsloth/docs/inference.md +0 -27
  1461. package/bin/skills/unsloth/docs/installation-docker.md +0 -155
  1462. package/bin/skills/unsloth/docs/installation-pip.md +0 -148
  1463. package/bin/skills/unsloth/docs/kernels-packing.md +0 -190
  1464. package/bin/skills/unsloth/docs/kimi-k2.5.md +0 -634
  1465. package/bin/skills/unsloth/docs/lm-studio.md +0 -235
  1466. package/bin/skills/unsloth/docs/lora-hot-swapping.md +0 -75
  1467. package/bin/skills/unsloth/docs/lora-hyperparameters.md +0 -363
  1468. package/bin/skills/unsloth/docs/memory-efficient-rl.md +0 -267
  1469. package/bin/skills/unsloth/docs/model-selection.md +0 -70
  1470. package/bin/skills/unsloth/docs/models.md +0 -532
  1471. package/bin/skills/unsloth/docs/multi-gpu-ddp.md +0 -90
  1472. package/bin/skills/unsloth/docs/notebooks.md +0 -223
  1473. package/bin/skills/unsloth/docs/overview.md +0 -110
  1474. package/bin/skills/unsloth/docs/qwen3-coder-next-extended.md +0 -900
  1475. package/bin/skills/unsloth/docs/qwen3-coder-next.md +0 -900
  1476. package/bin/skills/unsloth/docs/requirements.md +0 -45
  1477. package/bin/skills/unsloth/docs/reward-hacking.md +0 -25
  1478. package/bin/skills/unsloth/docs/saving-to-gguf.md +0 -138
  1479. package/bin/skills/unsloth/docs/saving-to-ollama.md +0 -46
  1480. package/bin/skills/unsloth/docs/sglang-guide.md +0 -278
  1481. package/bin/skills/unsloth/docs/speculative-decoding.md +0 -70
  1482. package/bin/skills/unsloth/docs/tool-calling.md +0 -334
  1483. package/bin/skills/unsloth/docs/troubleshooting-faq.md +0 -204
  1484. package/bin/skills/unsloth/docs/troubleshooting-inference.md +0 -26
  1485. package/bin/skills/unsloth/docs/tts-fine-tuning.md +0 -149
  1486. package/bin/skills/unsloth/docs/tutorial-grpo.md +0 -273
  1487. package/bin/skills/unsloth/docs/tutorial-llama3-ollama.md +0 -356
  1488. package/bin/skills/unsloth/docs/vision-fine-tuning.md +0 -135
  1489. package/bin/skills/unsloth/docs/vision-rl.md +0 -170
  1490. package/bin/skills/unsloth/docs/vllm-engine-arguments.md +0 -43
  1491. package/bin/skills/unsloth/docs/vllm-guide.md +0 -98
  1492. package/bin/skills/uspto-database/SKILL.md +0 -607
  1493. package/bin/skills/uspto-database/references/additional_apis.md +0 -394
  1494. package/bin/skills/uspto-database/references/patentsearch_api.md +0 -266
  1495. package/bin/skills/uspto-database/references/peds_api.md +0 -212
  1496. package/bin/skills/uspto-database/references/trademark_api.md +0 -358
  1497. package/bin/skills/uspto-database/scripts/patent_search.py +0 -290
  1498. package/bin/skills/uspto-database/scripts/peds_client.py +0 -285
  1499. package/bin/skills/uspto-database/scripts/trademark_client.py +0 -311
  1500. package/bin/skills/vaex/SKILL.md +0 -182
  1501. package/bin/skills/vaex/references/core_dataframes.md +0 -367
  1502. package/bin/skills/vaex/references/data_processing.md +0 -555
  1503. package/bin/skills/vaex/references/io_operations.md +0 -703
  1504. package/bin/skills/vaex/references/machine_learning.md +0 -728
  1505. package/bin/skills/vaex/references/performance.md +0 -571
  1506. package/bin/skills/vaex/references/visualization.md +0 -613
  1507. package/bin/skills/venue-templates/SKILL.md +0 -686
  1508. package/bin/skills/venue-templates/assets/examples/cell_summary_example.md +0 -247
  1509. package/bin/skills/venue-templates/assets/examples/medical_structured_abstract.md +0 -313
  1510. package/bin/skills/venue-templates/assets/examples/nature_abstract_examples.md +0 -213
  1511. package/bin/skills/venue-templates/assets/examples/neurips_introduction_example.md +0 -245
  1512. package/bin/skills/venue-templates/assets/grants/nih_specific_aims.tex +0 -235
  1513. package/bin/skills/venue-templates/assets/grants/nsf_proposal_template.tex +0 -375
  1514. package/bin/skills/venue-templates/assets/journals/nature_article.tex +0 -171
  1515. package/bin/skills/venue-templates/assets/journals/neurips_article.tex +0 -283
  1516. package/bin/skills/venue-templates/assets/journals/plos_one.tex +0 -317
  1517. package/bin/skills/venue-templates/assets/posters/beamerposter_academic.tex +0 -311
  1518. package/bin/skills/venue-templates/references/cell_press_style.md +0 -483
  1519. package/bin/skills/venue-templates/references/conferences_formatting.md +0 -564
  1520. package/bin/skills/venue-templates/references/cs_conference_style.md +0 -463
  1521. package/bin/skills/venue-templates/references/grants_requirements.md +0 -787
  1522. package/bin/skills/venue-templates/references/journals_formatting.md +0 -486
  1523. package/bin/skills/venue-templates/references/medical_journal_styles.md +0 -535
  1524. package/bin/skills/venue-templates/references/ml_conference_style.md +0 -556
  1525. package/bin/skills/venue-templates/references/nature_science_style.md +0 -405
  1526. package/bin/skills/venue-templates/references/posters_guidelines.md +0 -628
  1527. package/bin/skills/venue-templates/references/reviewer_expectations.md +0 -417
  1528. package/bin/skills/venue-templates/references/venue_writing_styles.md +0 -321
  1529. package/bin/skills/venue-templates/scripts/customize_template.py +0 -195
  1530. package/bin/skills/venue-templates/scripts/query_template.py +0 -266
  1531. package/bin/skills/venue-templates/scripts/validate_format.py +0 -250
  1532. package/bin/skills/verl/SKILL.md +0 -391
  1533. package/bin/skills/verl/references/api-reference.md +0 -301
  1534. package/bin/skills/verl/references/troubleshooting.md +0 -391
  1535. package/bin/skills/vllm/SKILL.md +0 -364
  1536. package/bin/skills/vllm/references/optimization.md +0 -226
  1537. package/bin/skills/vllm/references/quantization.md +0 -284
  1538. package/bin/skills/vllm/references/server-deployment.md +0 -255
  1539. package/bin/skills/vllm/references/troubleshooting.md +0 -447
  1540. package/bin/skills/weights-and-biases/SKILL.md +0 -590
  1541. package/bin/skills/weights-and-biases/references/artifacts.md +0 -584
  1542. package/bin/skills/weights-and-biases/references/integrations.md +0 -700
  1543. package/bin/skills/weights-and-biases/references/sweeps.md +0 -847
  1544. package/bin/skills/whisper/SKILL.md +0 -317
  1545. package/bin/skills/whisper/references/languages.md +0 -189
  1546. package/bin/skills/zarr-python/SKILL.md +0 -779
  1547. package/bin/skills/zarr-python/references/api_reference.md +0 -515
  1548. package/bin/skills/zinc-database/SKILL.md +0 -404
  1549. package/bin/skills/zinc-database/references/api_reference.md +0 -692
@@ -1,126 +0,0 @@
- ---
- name: pytorch-fsdp
- description: Expert guidance for Fully Sharded Data Parallel training with PyTorch FSDP - parameter sharding, mixed precision, CPU offloading, FSDP2
- version: 1.0.0
- author: Synthetic Sciences
- license: MIT
- tags: [Distributed Training, PyTorch, FSDP, Data Parallel, Sharding, Mixed Precision, CPU Offloading, FSDP2, Large-Scale Training]
- dependencies: [torch>=2.0, transformers]
- ---
-
- # PyTorch FSDP Skill
-
- Comprehensive assistance with pytorch-fsdp development, generated from official documentation.
-
- ## When to Use This Skill
-
- This skill should be triggered when:
- - Working with pytorch-fsdp
- - Asking about pytorch-fsdp features or APIs
- - Implementing pytorch-fsdp solutions
- - Debugging pytorch-fsdp code
- - Learning pytorch-fsdp best practices
-
- ## Quick Reference
-
- ### Common Patterns
-
- **Pattern 1:** Generic Join context manager
-
- The generic join context manager facilitates distributed training on uneven inputs. The relevant classes are `Join`, `Joinable`, and `JoinHook`. For a tutorial, see *Distributed Training with Uneven Inputs Using the Join Context Manager*.
-
- `class torch.distributed.algorithms.Join(joinables, enable=True, throw_on_early_termination=False, **kwargs)`
-
- Defines the generic join context manager, which allows custom hooks to be called after a process joins. These hooks should shadow the collective communications of non-joined processes, to prevent hanging and erroring and to ensure algorithmic correctness. Refer to `JoinHook` for details about the hook definition.
-
- **Warning:** the context manager requires each participating `Joinable` to call `notify_join_context()` before its own per-iteration collective communications, to ensure correctness.
-
- **Warning:** the context manager requires that all `process_group` attributes in the `JoinHook` objects are the same. If there are multiple `JoinHook` objects, the device of the first is used. The process group and device information is used for checking for non-joined processes and for notifying processes to throw an exception if `throw_on_early_termination` is enabled, both of which use an all-reduce.
-
- Parameters:
- - `joinables` (List[Joinable]) – a list of the participating `Joinable`s; their hooks are iterated over in the given order.
- - `enable` (bool) – a flag enabling uneven input detection; set to `False` only when the inputs are known not to be uneven (default: `True`).
- - `throw_on_early_termination` (bool) – a flag controlling whether to throw an exception upon detecting uneven inputs (default: `False`).
-
- Example (the original snippet used invalid `import torch.nn.parallel.DistributedDataParallel as DDP` syntax; corrected to `from ... import` here):
-
- ```python
- import torch
- import torch.distributed as dist
- from torch.nn.parallel import DistributedDataParallel as DDP
- from torch.distributed.optim import ZeroRedundancyOptimizer as ZeRO
- from torch.distributed.algorithms.join import Join
-
- # On each spawned worker
- def worker(rank):
-     dist.init_process_group("nccl", rank=rank, world_size=2)
-     model = DDP(torch.nn.Linear(1, 1).to(rank), device_ids=[rank])
-     optim = ZeRO(model.parameters(), torch.optim.Adam, lr=0.01)
-     # Rank 1 gets one more input than rank 0
-     inputs = [torch.tensor([1.]).to(rank) for _ in range(10 + rank)]
-     with Join([model, optim]):
-         for input in inputs:
-             loss = model(input).sum()
-             loss.backward()
-             optim.step()
-     # All ranks reach here without hanging/erroring
- ```
-
- `static notify_join_context(joinable)`
-
- Notifies the join context manager that the calling process has not yet joined; then, if `throw_on_early_termination=True`, checks whether uneven inputs have been detected (i.e. whether one process has already joined) and throws an exception if so. This method should be called from a `Joinable` object before its per-iteration collective communications – for example, at the beginning of the forward pass in `DistributedDataParallel`. Only the first `Joinable` passed into the context manager performs the collective communication in this method; for the others it is vacuous. Returns an async work handle for the notifying all-reduce if `joinable` is the first one passed into the context manager, and `None` otherwise.
-
- `class torch.distributed.algorithms.Joinable`
-
- An abstract base class for joinable classes. A joinable class should implement `join_hook()`, which returns a `JoinHook` instance, in addition to the abstract properties `join_device` and `join_process_group`, which return the device and the process group used for the collective communications needed by the join context manager.
-
- `class torch.distributed.algorithms.JoinHook`
-
- Defines a join hook, which provides two entry points into the join context manager: a main hook, called repeatedly while there exists a non-joined process, and a post-hook, called once all processes have joined. To implement a join hook, define a class that inherits from `JoinHook` and override the two methods as appropriate:
- - `main_hook()` – called while there exists a non-joined process, to shadow the collective communications of one training iteration (one forward pass, backward pass, and optimizer step).
- - `post_hook(is_last_joiner)` – called once after all processes have joined; the bool argument `is_last_joiner` indicates whether the rank is one of the last to join.
-
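The `JoinHook` entry points described under Pattern 1 can be sketched as a minimal subclass. This is a toy illustration, not from the PyTorch docs: the name `CounterJoinHook` and its counter are hypothetical, and a real hook would issue matching collectives inside `main_hook()` rather than just counting.

```python
from torch.distributed.algorithms.join import JoinHook

class CounterJoinHook(JoinHook):
    """Toy join hook that only counts the iterations it shadows (hypothetical)."""

    def __init__(self):
        self.shadowed_iters = 0
        self.was_last_joiner = None

    def main_hook(self):
        # Invoked once per training iteration while some process has not joined.
        # A real hook would shadow that iteration's collectives here (e.g. an
        # all-reduce of zeros) so the non-joined processes do not hang.
        self.shadowed_iters += 1

    def post_hook(self, is_last_joiner):
        # Invoked once after every process has joined.
        self.was_last_joiner = is_last_joiner

hook = CounterJoinHook()
hook.main_hook()
hook.main_hook()
hook.post_hook(True)
print(hook.shadowed_iters)  # 2
```

Passed to `Join` via a `Joinable`'s `join_hook()`, such a hook would run its `main_hook()` once per iteration that other ranks still execute after this rank's data is exhausted.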
- **Pattern 2:** Distributed communication package - torch.distributed# Created On: Jul 12, 2017 | Last Updated On: Sep 04, 2025 Note Please refer to PyTorch Distributed Overview for a brief introduction to all features related to distributed training. Backends# torch.distributed supports four built-in backends, each with different capabilities. The table below shows which functions are available for use with a CPU or GPU for each backend. For NCCL, GPU refers to CUDA GPU while for XCCL to XPU GPU. MPI supports CUDA only if the implementation used to build PyTorch supports it. Backend gloo mpi nccl xccl Device CPU GPU CPU GPU CPU GPU CPU GPU send ✓ ✘ ✓ ? ✘ ✓ ✘ ✓ recv ✓ ✘ ✓ ? ✘ ✓ ✘ ✓ broadcast ✓ ✓ ✓ ? ✘ ✓ ✘ ✓ all_reduce ✓ ✓ ✓ ? ✘ ✓ ✘ ✓ reduce ✓ ✓ ✓ ? ✘ ✓ ✘ ✓ all_gather ✓ ✓ ✓ ? ✘ ✓ ✘ ✓ gather ✓ ✓ ✓ ? ✘ ✓ ✘ ✓ scatter ✓ ✓ ✓ ? ✘ ✓ ✘ ✓ reduce_scatter ✓ ✓ ✘ ✘ ✘ ✓ ✘ ✓ all_to_all ✓ ✓ ✓ ? ✘ ✓ ✘ ✓ barrier ✓ ✘ ✓ ? ✘ ✓ ✘ ✓ Backends that come with PyTorch# PyTorch distributed package supports Linux (stable), MacOS (stable), and Windows (prototype). By default for Linux, the Gloo and NCCL backends are built and included in PyTorch distributed (NCCL only when building with CUDA). MPI is an optional backend that can only be included if you build PyTorch from source. (e.g. building PyTorch on a host that has MPI installed.) Note As of PyTorch v1.8, Windows supports all collective communications backend but NCCL, If the init_method argument of init_process_group() points to a file it must adhere to the following schema: Local file system, init_method="file:///d:/tmp/some_file" Shared file system, init_method="file://////{machine_name}/{share_folder_name}/some_file" Same as on Linux platform, you can enable TcpStore by setting environment variables, MASTER_ADDR and MASTER_PORT. Which backend to use?# In the past, we were often asked: “which backend should I use?”. Rule of thumb Use the NCCL backend for distributed training with CUDA GPU. 
Use the XCCL backend for distributed training with XPU GPU. Use the Gloo backend for distributed training with CPU. GPU hosts with InfiniBand interconnect Use NCCL, since it’s the only backend that currently supports InfiniBand and GPUDirect. GPU hosts with Ethernet interconnect Use NCCL, since it currently provides the best distributed GPU training performance, especially for multiprocess single-node or multi-node distributed training. If you encounter any problem with NCCL, use Gloo as the fallback option. (Note that Gloo currently runs slower than NCCL for GPUs.) CPU hosts with InfiniBand interconnect If your InfiniBand has enabled IP over IB, use Gloo, otherwise, use MPI instead. We are planning on adding InfiniBand support for Gloo in the upcoming releases. CPU hosts with Ethernet interconnect Use Gloo, unless you have specific reasons to use MPI. Common environment variables# Choosing the network interface to use# By default, both the NCCL and Gloo backends will try to find the right network interface to use. If the automatically detected interface is not correct, you can override it using the following environment variables (applicable to the respective backend): NCCL_SOCKET_IFNAME, for example export NCCL_SOCKET_IFNAME=eth0 GLOO_SOCKET_IFNAME, for example export GLOO_SOCKET_IFNAME=eth0 If you’re using the Gloo backend, you can specify multiple interfaces by separating them by a comma, like this: export GLOO_SOCKET_IFNAME=eth0,eth1,eth2,eth3. The backend will dispatch operations in a round-robin fashion across these interfaces. It is imperative that all processes specify the same number of interfaces in this variable. Other NCCL environment variables# Debugging - in case of NCCL failure, you can set NCCL_DEBUG=INFO to print an explicit warning message as well as basic NCCL initialization information. You may also use NCCL_DEBUG_SUBSYS to get more details about a specific aspect of NCCL. 
For example, NCCL_DEBUG_SUBSYS=COLL would print logs of collective calls, which may be helpful when debugging hangs, especially those caused by collective type or message size mismatch. In case of topology detection failure, it would be helpful to set NCCL_DEBUG_SUBSYS=GRAPH to inspect the detailed detection result and save as reference if further help from NCCL team is needed. Performance tuning - NCCL performs automatic tuning based on its topology detection to save users’ tuning effort. On some socket-based systems, users may still try tuning NCCL_SOCKET_NTHREADS and NCCL_NSOCKS_PERTHREAD to increase socket network bandwidth. These two environment variables have been pre-tuned by NCCL for some cloud providers, such as AWS or GCP. For a full list of NCCL environment variables, please refer to NVIDIA NCCL’s official documentation You can tune NCCL communicators even further using torch.distributed.ProcessGroupNCCL.NCCLConfig and torch.distributed.ProcessGroupNCCL.Options. Learn more about them using help (e.g. help(torch.distributed.ProcessGroupNCCL.NCCLConfig)) in the interpreter. Basics# The torch.distributed package provides PyTorch support and communication primitives for multiprocess parallelism across several computation nodes running on one or more machines. The class torch.nn.parallel.DistributedDataParallel() builds on this functionality to provide synchronous distributed training as a wrapper around any PyTorch model. This differs from the kinds of parallelism provided by Multiprocessing package - torch.multiprocessing and torch.nn.DataParallel() in that it supports multiple network-connected machines and in that the user must explicitly launch a separate copy of the main training script for each process. 
In the single-machine synchronous case, torch.distributed or the torch.nn.parallel.DistributedDataParallel() wrapper may still have advantages over other approaches to data-parallelism, including torch.nn.DataParallel(): Each process maintains its own optimizer and performs a complete optimization step with each iteration. While this may appear redundant, since the gradients have already been gathered together and averaged across processes and are thus the same for every process, this means that no parameter broadcast step is needed, reducing time spent transferring tensors between nodes. Each process contains an independent Python interpreter, eliminating the extra interpreter overhead and “GIL-thrashing” that comes from driving several execution threads, model replicas, or GPUs from a single Python process. This is especially important for models that make heavy use of the Python runtime, including models with recurrent layers or many small components. Initialization# The package needs to be initialized using the torch.distributed.init_process_group() or torch.distributed.device_mesh.init_device_mesh() function before calling any other methods. Both block until all processes have joined. Warning Initialization is not thread-safe. Process group creation should be performed from a single thread, to prevent inconsistent ‘UUID’ assignment across ranks, and to prevent races during initialization that can lead to hangs. torch.distributed.is_available()[source]# Return True if the distributed package is available. Otherwise, torch.distributed does not expose any other APIs. Currently, torch.distributed is available on Linux, MacOS and Windows. Set USE_DISTRIBUTED=1 to enable it when building PyTorch from source. Currently, the default value is USE_DISTRIBUTED=1 for Linux and Windows, USE_DISTRIBUTED=0 for MacOS. 
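To make the initialization requirement above concrete, here is a hedged single-process sketch of the store-based path: an explicitly constructed TCPStore plus explicit rank and world_size (the host and port are arbitrary placeholders):

```python
import torch.distributed as dist

# A TCPStore hosted by this process (is_master=True); placeholder port.
store = dist.TCPStore("127.0.0.1", 29502, 1, True)
dist.init_process_group("gloo", store=store, rank=0, world_size=1)

initialized = dist.is_initialized()
r = dist.get_rank()

dist.destroy_process_group()
```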
Return type bool torch.distributed.init_process_group(backend=None, init_method=None, timeout=None, world_size=-1, rank=-1, store=None, group_name='', pg_options=None, device_id=None)[source]# Initialize the default distributed process group. This will also initialize the distributed package. There are 2 main ways to initialize a process group: Specify store, rank, and world_size explicitly. Specify init_method (a URL string) which indicates where/how to discover peers. Optionally specify rank and world_size, or encode all required parameters in the URL and omit them. If neither is specified, init_method is assumed to be “env://”. Parameters backend (str or Backend, optional) – The backend to use. Depending on build-time configurations, valid values include mpi, gloo, nccl, ucc, xccl or one that is registered by a third-party plugin. Since 2.6, if backend is not provided, c10d will use a backend registered for the device type indicated by the device_id kwarg (if provided). The known default registrations today are: nccl for cuda, gloo for cpu, xccl for xpu. If neither backend nor device_id is provided, c10d will detect the accelerator on the run-time machine and use a backend registered for that detected accelerator (or cpu). This field can be given as a lowercase string (e.g., "gloo"), which can also be accessed via Backend attributes (e.g., Backend.GLOO). If using multiple processes per machine with nccl backend, each process must have exclusive access to every GPU it uses, as sharing GPUs between processes can result in deadlock or NCCL invalid usage. ucc backend is experimental. Default backend for the device can be queried with get_default_backend_for_device(). init_method (str, optional) – URL specifying how to initialize the process group. Default is “env://” if no init_method or store is specified. Mutually exclusive with store. world_size (int, optional) – Number of processes participating in the job. Required if store is specified. 
rank (int, optional) – Rank of the current process (it should be a number between 0 and world_size-1). Required if store is specified. store (Store, optional) – Key/value store accessible to all workers, used to exchange connection/address information. Mutually exclusive with init_method. timeout (timedelta, optional) – Timeout for operations executed against the process group. Default value is 10 minutes for NCCL and 30 minutes for other backends. This is the duration after which collectives will be aborted asynchronously and the process will crash. This is done since CUDA execution is async and it is no longer safe to continue executing user code since failed async NCCL operations might result in subsequent CUDA operations running on corrupted data. When TORCH_NCCL_BLOCKING_WAIT is set, the process will block and wait for this timeout. group_name (str, optional, deprecated) – Group name. This argument is ignored. pg_options (ProcessGroupOptions, optional) – process group options specifying what additional options need to be passed in during the construction of specific process groups. As of now, the only option we support is ProcessGroupNCCL.Options for the nccl backend; is_high_priority_stream can be specified so that the nccl backend can pick up high priority cuda streams when there are compute kernels waiting. For other available options to configure nccl, see https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/api/types.html#ncclconfig-t device_id (torch.device | int, optional) – a single, specific device this process will work on, allowing for backend-specific optimizations. Currently this has two effects, only under NCCL: the communicator is immediately formed (calling ncclCommInit* immediately rather than the normal lazy call) and sub-groups will use ncclCommSplit when possible to avoid unnecessary overhead of group creation. If you want to surface NCCL initialization errors early, you can also use this field.
If an int is provided, the API assumes that the accelerator type at compile time will be used. Note To enable backend == Backend.MPI, PyTorch needs to be built from source on a system that supports MPI. Note Support for multiple backends is experimental. Currently when no backend is specified, both gloo and nccl backends will be created. The gloo backend will be used for collectives with CPU tensors and the nccl backend will be used for collectives with CUDA tensors. A custom backend can be specified by passing in a string with format “<device_type>:<backend_name>,<device_type>:<backend_name>”, e.g. “cpu:gloo,cuda:custom_backend”. torch.distributed.device_mesh.init_device_mesh(device_type, mesh_shape, *, mesh_dim_names=None, backend_override=None)[source]# Initializes a DeviceMesh based on device_type, mesh_shape, and mesh_dim_names parameters. This creates a DeviceMesh with an n-dimensional array layout, where n is the length of mesh_shape. If mesh_dim_names is provided, each dimension is labeled as mesh_dim_names[i]. Note init_device_mesh follows SPMD programming model, meaning the same PyTorch Python program runs on all processes/ranks in the cluster. Ensure mesh_shape (the dimensions of the nD array describing device layout) is identical across all ranks. Inconsistent mesh_shape may lead to hanging. Note If no process group is found, init_device_mesh will initialize distributed process group/groups required for distributed communications behind the scene. Parameters device_type (str) – The device type of the mesh. Currently supports: “cpu”, “cuda/cuda-like”, “xpu”. Passing in a device type with a GPU index, such as “cuda:0”, is not allowed. mesh_shape (Tuple[int]) – A tuple defining the dimensions of the multi-dimensional array describing the layout of devices. mesh_dim_names (Tuple[str], optional) – A tuple of mesh dimension names to assign to each dimension of the multi-dimensional array describing the layout of devices. 
Its length must match the length of mesh_shape. Each string in mesh_dim_names must be unique. backend_override (Dict[int | str, tuple[str, Options] | str | Options], optional) – Overrides for some or all of the ProcessGroups that will be created for each mesh dimension. Each key can be either the index of a dimension or its name (if mesh_dim_names is provided). Each value can be a tuple containing the name of the backend and its options, or just one of these two components (in which case the other will be set to its default value). Returns A DeviceMesh object representing the device layout. Return type DeviceMesh Example:
>>> from torch.distributed.device_mesh import init_device_mesh
>>>
>>> mesh_1d = init_device_mesh("cuda", mesh_shape=(8,))
>>> mesh_2d = init_device_mesh("cuda", mesh_shape=(2, 8), mesh_dim_names=("dp", "tp"))
torch.distributed.is_initialized()[source]# Check if the default process group has been initialized. Return type bool torch.distributed.is_mpi_available()[source]# Check if the MPI backend is available. Return type bool torch.distributed.is_nccl_available()[source]# Check if the NCCL backend is available. Return type bool torch.distributed.is_gloo_available()[source]# Check if the Gloo backend is available. Return type bool torch.distributed.distributed_c10d.is_xccl_available()[source]# Check if the XCCL backend is available. Return type bool torch.distributed.is_torchelastic_launched()[source]# Check whether this process was launched with torch.distributed.elastic (aka torchelastic). The existence of the TORCHELASTIC_RUN_ID environment variable is used as a proxy to determine whether the current process was launched with torchelastic. This is a reasonable proxy since TORCHELASTIC_RUN_ID maps to the rendezvous id, which is always a non-null value indicating the job id for peer discovery purposes. Return type bool torch.distributed.get_default_backend_for_device(device)[source]# Return the default backend for the given device.
Parameters device (Union[str, torch.device]) – The device to get the default backend for. Returns The default backend for the given device as a lower case string. Return type str Currently three initialization methods are supported:

TCP initialization#

There are two ways to initialize using TCP, both requiring a network address reachable from all processes and a desired world_size. The first way requires specifying an address that belongs to the rank 0 process. This initialization method requires that all processes have manually specified ranks. Note that multicast addresses are not supported anymore in the latest distributed package. group_name is deprecated as well.

import torch.distributed as dist

# Use address of one of the machines
dist.init_process_group(backend, init_method='tcp://10.1.1.20:23456',
                        rank=args.rank, world_size=4)

Shared file-system initialization#

Another initialization method makes use of a file system that is shared and visible from all machines in a group, along with a desired world_size. The URL should start with file:// and contain a path to a non-existent file (in an existing directory) on a shared file system. File-system initialization will automatically create that file if it doesn't exist, but will not delete the file. Therefore, it is your responsibility to make sure that the file is cleaned up before the next init_process_group() call on the same file path/name. Note that automatic rank assignment is not supported anymore in the latest distributed package and group_name is deprecated as well.

Warning This method assumes that the file system supports locking using fcntl - most local systems and NFS support it.

Warning This method will always create the file and try its best to clean up and remove the file at the end of the program. In other words, each initialization with the file init method will need a brand new empty file in order for the initialization to succeed.
If the same file used by the previous initialization (which happens not to get cleaned up) is used again, this is unexpected behavior and can often cause deadlocks and failures. Therefore, even though this method will try its best to clean up the file, if the auto-delete happens to be unsuccessful, it is your responsibility to ensure that the file is removed at the end of training, to prevent the same file from being reused the next time. This is especially important if you plan to call init_process_group() multiple times on the same file name. In other words, if the file is not removed/cleaned up and you call init_process_group() again on that file, failures are expected. The rule of thumb here is: make sure the file is non-existent or empty every time init_process_group() is called.

import torch.distributed as dist

# rank should always be specified
dist.init_process_group(backend, init_method='file:///mnt/nfs/sharedfile',
                        world_size=4, rank=args.rank)

Environment variable initialization#

This method will read the configuration from environment variables, allowing one to fully customize how the information is obtained. The variables to be set are:

MASTER_PORT - required; has to be a free port on the machine with rank 0
MASTER_ADDR - required (except for rank 0); address of the rank 0 node
WORLD_SIZE - required; can be set either here, or in a call to the init function
RANK - required; can be set either here, or in a call to the init function

The machine with rank 0 will be used to set up all connections. This is the default method, meaning that init_method does not have to be specified (or can be env://).

Improving initialization time#

TORCH_GLOO_LAZY_INIT - establishes connections on demand rather than using a full mesh, which can greatly improve initialization time for non-all2all operations.

Post-Initialization#

Once torch.distributed.init_process_group() has been run, the following functions can be used.
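The environment-variable method described above can be sketched in a single process (placeholder port; with world_size 1 the rendezvous is trivial, and normally a launcher such as torchrun exports these variables for you):

```python
import os
import torch.distributed as dist

# All four variables set in-process for illustration; placeholder port.
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29503"
os.environ["WORLD_SIZE"] = "1"
os.environ["RANK"] = "0"

dist.init_process_group("gloo")   # init_method defaults to "env://"
initialized = dist.is_initialized()
dist.destroy_process_group()
```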
To check whether the process group has already been initialized use torch.distributed.is_initialized(). class torch.distributed.Backend(name)[source]# An enum-like class for backends. Available backends: GLOO, NCCL, UCC, MPI, XCCL, and other registered backends. The values of this class are lowercase strings, e.g., "gloo". They can be accessed as attributes, e.g., Backend.NCCL. This class can be directly called to parse the string, e.g., Backend(backend_str) will check if backend_str is valid, and return the parsed lowercase string if so. It also accepts uppercase strings, e.g., Backend("GLOO") returns "gloo". Note The entry Backend.UNDEFINED is present but only used as the initial value of some fields. Users should neither use it directly nor assume its existence. classmethod register_backend(name, func, extended_api=False, devices=None)[source]# Register a new backend with the given name and instantiating function. This class method is used by 3rd party ProcessGroup extensions to register new backends. Parameters name (str) – Backend name of the ProcessGroup extension. It should match the one in init_process_group(). func (function) – Function handler that instantiates the backend. The function should be implemented in the backend extension and takes four arguments, including store, rank, world_size, and timeout. extended_api (bool, optional) – Whether the backend supports extended argument structure. Default: False. If set to True, the backend will get an instance of c10d::DistributedBackendOptions, and a process group options object as defined by the backend implementation. devices (str or list of str, optional) – device type(s) this backend supports, e.g. "cpu", "cuda", etc. If None, both "cpu" and "cuda" are assumed. Note This support of 3rd party backends is experimental and subject to change. torch.distributed.get_backend(group=None)[source]# Return the backend of the given process group. Parameters group (ProcessGroup, optional) – The process group to work on.
The default is the general main process group. If another specific group is specified, the calling process must be part of group. Returns The backend of the given process group as a lower case string. Return type Backend torch.distributed.get_rank(group=None)[source]# Return the rank of the current process in the provided group, or in the default group if none is provided. Rank is a unique identifier assigned to each process within a distributed process group. Ranks are always consecutive integers ranging from 0 to world_size - 1. Parameters group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Returns The rank of the process group -1, if not part of the group Return type int torch.distributed.get_world_size(group=None)[source]# Return the number of processes in the current process group. Parameters group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Returns The world size of the process group -1, if not part of the group Return type int Shutdown# It is important to clean up resources on exit by calling destroy_process_group(). The simplest pattern to follow is to destroy every process group and backend by calling destroy_process_group() with the default value of None for the group argument, at a point in the training script where communications are no longer needed, usually near the end of main(). The call should be made once per trainer process, not at the outer process-launcher level. If destroy_process_group() is not called by all ranks in a pg within the timeout duration, especially when there are multiple process groups in the application, e.g. for N-D parallelism, hangs on exit are possible. This is because the destructor for ProcessGroupNCCL calls ncclCommAbort, which must be called collectively, but the order of calling ProcessGroupNCCL's destructor if called by Python's GC is not deterministic.
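The recommended shutdown pattern can be sketched as follows (single process, gloo on CPU, placeholder port):

```python
import os
import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29504")  # placeholder free port
dist.init_process_group("gloo", rank=0, world_size=1)

rank = dist.get_rank()          # 0 in this single-process example
world = dist.get_world_size()   # 1

# ... training loop would go here ...

# Destroy every process group and backend, once per trainer process,
# near the end of main() when communication is no longer needed.
dist.destroy_process_group()
```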
Calling destroy_process_group() helps by ensuring ncclCommAbort is called in a consistent order across ranks, and avoids calling ncclCommAbort during ProcessGroupNCCL’s destructor. Reinitialization# destroy_process_group can also be used to destroy individual process groups. One use case could be fault tolerant training, where a process group may be destroyed and then a new one initialized during runtime. In this case, it’s critical to synchronize the trainer processes using some means other than torch.distributed primitives _after_ calling destroy and before subsequently initializing. This behavior is currently unsupported/untested, due to the difficulty of achieving this synchronization, and is considered a known issue. Please file a github issue or RFC if this is a use case that’s blocking you. Groups# By default collectives operate on the default group (also called the world) and require all processes to enter the distributed function call. However, some workloads can benefit from more fine-grained communication. This is where distributed groups come into play. new_group() function can be used to create new groups, with arbitrary subsets of all processes. It returns an opaque group handle that can be given as a group argument to all collectives (collectives are distributed functions to exchange information in certain well-known programming patterns). torch.distributed.new_group(ranks=None, timeout=None, backend=None, pg_options=None, use_local_synchronization=False, group_desc=None, device_id=None)[source]# Create a new distributed group. This function requires that all processes in the main group (i.e. all processes that are part of the distributed job) enter this function, even if they are not going to be members of the group. Additionally, groups should be created in the same order in all processes. 
Warning Safe concurrent usage: When using multiple process groups with the NCCL backend, the user must ensure a globally consistent execution order of collectives across ranks. If multiple threads within a process issue collectives, explicit synchronization is necessary to ensure consistent ordering. When using async variants of torch.distributed communication APIs, a work object is returned and the communication kernel is enqueued on a separate CUDA stream, allowing overlap of communication and computation. Once one or more async ops have been issued on one process group, they must be synchronized with other cuda streams by calling work.wait() before using another process group. See Using multiple NCCL communicators concurrently <https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/communicators.html#using-multiple-nccl-communicators-concurrently> for more details. Parameters ranks (list[int]) – List of ranks of group members. If None, will be set to all ranks. Default is None. timeout (timedelta, optional) – see init_process_group for details and default value. backend (str or Backend, optional) – The backend to use. Depending on build-time configurations, valid values are gloo and nccl. By default uses the same backend as the global group. This field should be given as a lowercase string (e.g., "gloo"), which can also be accessed via Backend attributes (e.g., Backend.GLOO). If None is passed in, the backend corresponding to the default process group will be used. Default is None. pg_options (ProcessGroupOptions, optional) – process group options specifying what additional options need to be passed in during the construction of specific process groups. i.e. for the nccl backend, is_high_priority_stream can be specified so that process group can pick up high priority cuda streams. 
For other available options to configure nccl, see https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/api/types.html#ncclconfig-t use_local_synchronization (bool, optional) – perform a group-local barrier at the end of the process group creation. This is different in that non-member ranks don't need to call into the API and don't join the barrier. group_desc (str, optional) – a string to describe the process group. device_id (torch.device, optional) – a single, specific device to "bind" this process to. The new_group call will try to initialize a communication backend immediately for the device if this field is given. Returns A handle of the distributed group that can be given to collective calls, or GroupMember.NON_GROUP_MEMBER if the rank is not part of ranks. N.B. use_local_synchronization doesn't work with MPI. N.B. While use_local_synchronization=True can be significantly faster with larger clusters and small process groups, care must be taken since it changes cluster behavior as non-member ranks don't join the group barrier(). N.B. use_local_synchronization=True can lead to deadlocks when each rank creates multiple overlapping process groups. To avoid that, make sure all ranks follow the same global creation order. torch.distributed.get_group_rank(group, global_rank)[source]# Translate a global rank into a group rank. global_rank must be part of group, otherwise this raises RuntimeError. Parameters group (ProcessGroup) – ProcessGroup to find the relative rank. global_rank (int) – Global rank to query. Returns Group rank of global_rank relative to group Return type int N.B. calling this function on the default process group returns identity torch.distributed.get_global_rank(group, group_rank)[source]# Translate a group rank into a global rank. group_rank must be part of group, otherwise this raises RuntimeError. Parameters group (ProcessGroup) – ProcessGroup to find the global rank from. group_rank (int) – Group rank to query.
Returns Global rank of group_rank relative to group Return type int N.B. calling this function on the default process group returns identity torch.distributed.get_process_group_ranks(group)[source]# Get all ranks associated with group. Parameters group (Optional[ProcessGroup]) – ProcessGroup to get all ranks from. If None, the default process group will be used. Returns List of global ranks ordered by group rank. Return type list[int] DeviceMesh# DeviceMesh is a higher-level abstraction that manages process groups (or NCCL communicators). It allows users to easily create inter-node and intra-node process groups without worrying about how to set up the ranks correctly for different sub process groups, and it helps manage those distributed process groups easily. The init_device_mesh() function can be used to create a new DeviceMesh, with a mesh shape describing the device topology. class torch.distributed.device_mesh.DeviceMesh(device_type, mesh, *, mesh_dim_names=None, backend_override=None, _init_backend=True)[source]# DeviceMesh represents a mesh of devices, where the layout of devices can be represented as an n-dimensional array, and each value of the array is the global id of the default process group ranks. DeviceMesh can be used to set up the N-dimensional device connections across the cluster, and to manage the ProcessGroups for N-dimensional parallelisms. Communications can happen on each dimension of the DeviceMesh separately. DeviceMesh respects the device that the user has already selected (i.e. if the user calls torch.cuda.set_device before the DeviceMesh initialization), and will select/set the device for the current process if the user has not set the device beforehand. Note that manual device selection should happen BEFORE the DeviceMesh initialization. DeviceMesh can also be used as a context manager when used together with DTensor APIs.
Note DeviceMesh follows the SPMD programming model, which means the same PyTorch Python program runs on all processes/ranks in the cluster. Therefore, users need to make sure the mesh array (which describes the layout of devices) is identical across all ranks. An inconsistent mesh will lead to a silent hang. Parameters device_type (str) – The device type of the mesh. Currently supports: "cpu", "cuda/cuda-like". mesh (ndarray) – A multi-dimensional array or an integer tensor describing the layout of devices, where the IDs are global IDs of the default process group. Returns A DeviceMesh object representing the device layout. Return type DeviceMesh The following program runs on each process/rank in an SPMD manner. In this example, we have 2 hosts with 4 GPUs each. A reduction over the first dimension of mesh will reduce across columns (0, 4), .. and (3, 7), a reduction over the second dimension of mesh reduces across rows (0, 1, 2, 3) and (4, 5, 6, 7). Example:
>>> from torch.distributed.device_mesh import DeviceMesh
>>>
>>> # Initialize device mesh as (2, 4) to represent the topology
>>> # of cross-host(dim 0), and within-host (dim 1).
>>> mesh = DeviceMesh(device_type="cuda", mesh=[[0, 1, 2, 3],[4, 5, 6, 7]])
static from_group(group, device_type, mesh=None, *, mesh_dim_names=None)[source]# Constructs a DeviceMesh with device_type from an existing ProcessGroup or a list of existing ProcessGroups. The constructed device mesh has a number of dimensions equal to the number of groups passed. For example, if a single process group is passed in, the resulting DeviceMesh is a 1D mesh. If a list of 2 process groups is passed in, the resulting DeviceMesh is a 2D mesh. If more than one group is passed, then the mesh and mesh_dim_names arguments are required. The order of the process groups passed in determines the topology of the mesh. For example, the first process group will be the 0th dimension of the DeviceMesh.
The mesh tensor passed in must have the same number of dimensions as the number of process groups passed in, and the order of the dimensions in the mesh tensor must match the order in the process groups passed in. Parameters group (ProcessGroup or list[ProcessGroup]) – the existing ProcessGroup or a list of existing ProcessGroups. device_type (str) – The device type of the mesh. Currently supports: "cpu", "cuda/cuda-like". Passing in a device type with a GPU index, such as "cuda:0", is not allowed. mesh (torch.Tensor or ArrayLike, optional) – A multi-dimensional array or an integer tensor describing the layout of devices, where the IDs are global IDs of the default process group. Default is None. mesh_dim_names (tuple[str], optional) – A tuple of mesh dimension names to assign to each dimension of the multi-dimensional array describing the layout of devices. Its length must match the length of mesh_shape. Each string in mesh_dim_names must be unique. Default is None. Returns A DeviceMesh object representing the device layout. Return type DeviceMesh get_all_groups()[source]# Returns a list of ProcessGroups for all mesh dimensions. Returns A list of ProcessGroup objects. Return type list[torch.distributed.distributed_c10d.ProcessGroup] get_coordinate()[source]# Return the relative indices of this rank relative to all dimensions of the mesh. If this rank is not part of the mesh, return None. Return type Optional[list[int]] get_group(mesh_dim=None)[source]# Returns the single ProcessGroup specified by mesh_dim, or, if mesh_dim is not specified and the DeviceMesh is 1-dimensional, returns the only ProcessGroup in the mesh. Parameters mesh_dim (str or int, optional) – it can be the name of the mesh dimension or the index of the mesh dimension. Default is None. Returns A ProcessGroup object. Return type ProcessGroup get_local_rank(mesh_dim=None)[source]# Returns the local rank of the given mesh_dim of the DeviceMesh.
Parameters mesh_dim (str or int, optional) – it can be the name of the mesh dimension or the index of the mesh dimension. Default is None. Returns An integer denoting the local rank. Return type int The following program runs on each process/rank in an SPMD manner. In this example, we have 2 hosts with 4 GPUs each. Calling mesh_2d.get_local_rank(mesh_dim=0) on rank 0, 1, 2, 3 would return 0. Calling mesh_2d.get_local_rank(mesh_dim=0) on rank 4, 5, 6, 7 would return 1. Calling mesh_2d.get_local_rank(mesh_dim=1) on rank 0, 4 would return 0. Calling mesh_2d.get_local_rank(mesh_dim=1) on rank 1, 5 would return 1. Calling mesh_2d.get_local_rank(mesh_dim=1) on rank 2, 6 would return 2. Calling mesh_2d.get_local_rank(mesh_dim=1) on rank 3, 7 would return 3. Example:
>>> from torch.distributed.device_mesh import DeviceMesh
>>>
>>> # Initialize device mesh as (2, 4) to represent the topology
>>> # of cross-host(dim 0), and within-host (dim 1).
>>> mesh = DeviceMesh(device_type="cuda", mesh=[[0, 1, 2, 3],[4, 5, 6, 7]])
get_rank()[source]# Returns the current global rank. Return type int Point-to-point communication# torch.distributed.send(tensor, dst=None, group=None, tag=0, group_dst=None)[source]# Send a tensor synchronously. Warning tag is not supported with the NCCL backend. Parameters tensor (Tensor) – Tensor to send. dst (int) – Destination rank on global process group (regardless of group argument). Destination rank should not be the same as the rank of the current process. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. tag (int, optional) – Tag to match send with remote recv group_dst (int, optional) – Destination rank on group. Invalid to specify both dst and group_dst. torch.distributed.recv(tensor, src=None, group=None, tag=0, group_src=None)[source]# Receives a tensor synchronously. Warning tag is not supported with the NCCL backend.
Parameters tensor (Tensor) – Tensor to fill with received data. src (int, optional) – Source rank on global process group (regardless of group argument). Will receive from any process if unspecified. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. tag (int, optional) – Tag to match recv with remote send group_src (int, optional) – Source rank on group. Invalid to specify both src and group_src. Returns Sender rank -1, if not part of the group Return type int isend() and irecv() return distributed request objects when used. In general, the type of this object is unspecified as they should never be created manually, but they are guaranteed to support two methods: is_completed() - returns True if the operation has finished wait() - will block the process until the operation is finished. is_completed() is guaranteed to return True once it returns. torch.distributed.isend(tensor, dst=None, group=None, tag=0, group_dst=None)[source]# Send a tensor asynchronously. Warning Modifying tensor before the request completes causes undefined behavior. Warning tag is not supported with the NCCL backend. Unlike send, which is blocking, isend allows src == dst rank, i.e. send to self. Parameters tensor (Tensor) – Tensor to send. dst (int) – Destination rank on global process group (regardless of group argument) group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. tag (int, optional) – Tag to match send with remote recv group_dst (int, optional) – Destination rank on group. Invalid to specify both dst and group_dst Returns A distributed request object. None, if not part of the group Return type Optional[Work] torch.distributed.irecv(tensor, src=None, group=None, tag=0, group_src=None)[source]# Receives a tensor asynchronously. Warning tag is not supported with the NCCL backend. Unlike recv, which is blocking, irecv allows src == dst rank, i.e.
recv from self. Parameters tensor (Tensor) – Tensor to fill with received data. src (int, optional) – Source rank on global process group (regardless of group argument). Will receive from any process if unspecified. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. tag (int, optional) – Tag to match recv with remote send group_src (int, optional) – Source rank on group. Invalid to specify both src and group_src. Returns A distributed request object. None, if not part of the group Return type Optional[Work] torch.distributed.send_object_list(object_list, dst=None, group=None, device=None, group_dst=None, use_batch=False)[source]# Sends picklable objects in object_list synchronously. Similar to send(), but Python objects can be passed in. Note that all objects in object_list must be picklable in order to be sent. Parameters object_list (List[Any]) – List of input objects to send. Each object must be picklable. Receiver must provide lists of equal sizes. dst (int) – Destination rank to send object_list to. Destination rank is based on global process group (regardless of group argument) group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Default is None. device (torch.device, optional) – If not None, the objects are serialized and converted to tensors which are moved to the device before sending. Default is None. group_dst (int, optional) – Destination rank on group. Must specify one of dst and group_dst but not both use_batch (bool, optional) – If True, use batch p2p operations instead of regular send operations. This avoids initializing 2-rank communicators and uses existing entire group communicators. See batch_isend_irecv for usage and assumptions. Default is False. Returns None. 
Note For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication takes place. In this case, the device used is given by torch.cuda.current_device() and it is the user’s responsibility to ensure that this is set so that each rank has an individual GPU, via torch.cuda.set_device(). Warning Object collectives have a number of serious performance and scalability limitations. See Object collectives for details. Warning send_object_list() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Warning Calling send_object_list() with GPU tensors is not well supported and inefficient as it incurs GPU -> CPU transfer since tensors would be pickled. Please consider using send() instead. Example::>>> # Note: Process group initialization omitted on each rank. >>> import torch.distributed as dist >>> # Assumes backend is not NCCL >>> device = torch.device("cpu") >>> if dist.get_rank() == 0: >>> # Assumes world_size of 2. >>> objects = ["foo", 12, {1: 2}] # any picklable object >>> dist.send_object_list(objects, dst=1, device=device) >>> else: >>> objects = [None, None, None] >>> dist.recv_object_list(objects, src=0, device=device) >>> objects ['foo', 12, {1: 2}] torch.distributed.recv_object_list(object_list, src=None, group=None, device=None, group_src=None, use_batch=False)[source]# Receives picklable objects in object_list synchronously. Similar to recv(), but can receive Python objects. Parameters object_list (List[Any]) – List of objects to receive into. Must provide a list of sizes equal to the size of the list being sent. src (int, optional) – Source rank from which to recv object_list. Source rank is based on global process group (regardless of group argument) Will receive from any rank if set to None. Default is None. 
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Default is None. device (torch.device, optional) – If not None, receives on this device. Default is None. group_src (int, optional) – Source rank on group. Invalid to specify both src and group_src. use_batch (bool, optional) – If True, use batch p2p operations instead of regular send operations. This avoids initializing 2-rank communicators and uses existing entire group communicators. See batch_isend_irecv for usage and assumptions. Default is False. Returns Sender rank. -1 if rank is not part of the group. If rank is part of the group, object_list will contain the sent objects from src rank. Note For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication takes place. In this case, the device used is given by torch.cuda.current_device() and it is the user’s responsibility to ensure that this is set so that each rank has an individual GPU, via torch.cuda.set_device(). Warning Object collectives have a number of serious performance and scalability limitations. See Object collectives for details. Warning recv_object_list() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Warning Calling recv_object_list() with GPU tensors is not well supported and inefficient as it incurs GPU -> CPU transfer since tensors would be pickled. Please consider using recv() instead. Example::>>> # Note: Process group initialization omitted on each rank. >>> import torch.distributed as dist >>> # Assumes backend is not NCCL >>> device = torch.device("cpu") >>> if dist.get_rank() == 0: >>> # Assumes world_size of 2. 
>>> objects = ["foo", 12, {1: 2}] # any picklable object >>> dist.send_object_list(objects, dst=1, device=device) >>> else: >>> objects = [None, None, None] >>> dist.recv_object_list(objects, src=0, device=device) >>> objects ['foo', 12, {1: 2}] torch.distributed.batch_isend_irecv(p2p_op_list)[source]# Send or Receive a batch of tensors asynchronously and return a list of requests. Process each of the operations in p2p_op_list and return the corresponding requests. NCCL, Gloo, and UCC backend are currently supported. Parameters p2p_op_list (list[torch.distributed.distributed_c10d.P2POp]) – A list of point-to-point operations(type of each operator is torch.distributed.P2POp). The order of the isend/irecv in the list matters and it needs to match with corresponding isend/irecv on the remote end. Returns A list of distributed request objects returned by calling the corresponding op in the op_list. Return type list[torch.distributed.distributed_c10d.Work] Examples >>> send_tensor = torch.arange(2, dtype=torch.float32) + 2 * rank >>> recv_tensor = torch.randn(2, dtype=torch.float32) >>> send_op = dist.P2POp(dist.isend, send_tensor, (rank + 1) % world_size) >>> recv_op = dist.P2POp( ... dist.irecv, recv_tensor, (rank - 1 + world_size) % world_size ... ) >>> reqs = batch_isend_irecv([send_op, recv_op]) >>> for req in reqs: >>> req.wait() >>> recv_tensor tensor([2, 3]) # Rank 0 tensor([0, 1]) # Rank 1 Note Note that when this API is used with the NCCL PG backend, users must set the current GPU device with torch.cuda.set_device, otherwise it will lead to unexpected hang issues. In addition, if this API is the first collective call in the group passed to dist.P2POp, all ranks of the group must participate in this API call; otherwise, the behavior is undefined. If this API call is not the first collective call in the group, batched P2P operations involving only a subset of ranks of the group are allowed. 
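The isend()/irecv() entries above have no runnable example. The following is a minimal two-rank sketch using the Gloo CPU backend; the address, port, and world size are arbitrary choices for illustration, not values prescribed by the API:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def exchange(rank, world_size, port):
    # Rendezvous settings for this sketch; any free port works.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = str(port)
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    if rank == 0:
        payload = torch.tensor([1.0, 2.0])
        req = dist.isend(payload, dst=1)   # returns immediately with a request object
    else:
        payload = torch.zeros(2)
        req = dist.irecv(payload, src=0)   # returns immediately with a request object
    req.wait()  # block until the transfer completes
    if rank == 1:
        # After wait(), the received data is safe to read.
        assert torch.equal(payload, torch.tensor([1.0, 2.0]))
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(exchange, args=(2, 29501), nprocs=2)
```

Note the request must be waited on (or polled with is_completed()) before the send buffer is modified or the receive buffer is read, per the undefined-behavior warning above.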
class torch.distributed.P2POp(op, tensor, peer=None, group=None, tag=0, group_peer=None)[source]# A class to build point-to-point operations for batch_isend_irecv. This class builds the type of P2P operation, communication buffer, peer rank, Process Group, and tag. Instances of this class will be passed to batch_isend_irecv for point-to-point communications. Parameters op (Callable) – A function to send data to or receive data from a peer process. The type of op is either torch.distributed.isend or torch.distributed.irecv. tensor (Tensor) – Tensor to send or receive. peer (int, optional) – Destination or source rank. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. tag (int, optional) – Tag to match send with recv. group_peer (int, optional) – Destination or source rank. Synchronous and asynchronous collective operations# Every collective operation function supports the following two kinds of operations, depending on the setting of the async_op flag passed into the collective: Synchronous operation - the default mode, when async_op is set to False. When the function returns, it is guaranteed that the collective operation is performed. In the case of CUDA operations, it is not guaranteed that the CUDA operation is completed, since CUDA operations are asynchronous. For CPU collectives, any further function calls utilizing the output of the collective call will behave as expected. For CUDA collectives, function calls utilizing the output on the same CUDA stream will behave as expected. Users must take care of synchronization under the scenario of running under different streams. For details on CUDA semantics such as stream synchronization, see CUDA Semantics. See the below script to see examples of differences in these semantics for CPU and CUDA operations. Asynchronous operation - when async_op is set to True. The collective operation function returns a distributed request object. 
In general, you don’t need to create it manually and it is guaranteed to support two methods: is_completed() - in the case of CPU collectives, returns True if completed. In the case of CUDA operations, returns True if the operation has been successfully enqueued onto a CUDA stream and the output can be utilized on the default stream without further synchronization. wait() - in the case of CPU collectives, will block the process until the operation is completed. In the case of CUDA collectives, will block the currently active CUDA stream until the operation is completed (but will not block the CPU). get_future() - returns torch._C.Future object. Supported for NCCL, also supported for most operations on GLOO and MPI, except for peer to peer operations. Note: as we continue adopting Futures and merging APIs, get_future() call might become redundant. Example The following code can serve as a reference regarding semantics for CUDA operations when using distributed collectives. It shows the explicit need to synchronize when using collective outputs on different CUDA streams: # Code runs on each rank. dist.init_process_group("nccl", rank=rank, world_size=2) output = torch.tensor([rank]).cuda(rank) s = torch.cuda.Stream() handle = dist.all_reduce(output, async_op=True) # Wait ensures the operation is enqueued, but not necessarily complete. handle.wait() # Using result on non-default stream. with torch.cuda.stream(s): s.wait_stream(torch.cuda.default_stream()) output.add_(100) if rank == 0: # if the explicit call to wait_stream was omitted, the output below will be # non-deterministically 1 or 101, depending on whether the allreduce overwrote # the value after the add completed. print(output) Collective functions# torch.distributed.broadcast(tensor, src=None, group=None, async_op=False, group_src=None)[source]# Broadcasts the tensor to the whole group. tensor must have the same number of elements in all processes participating in the collective. 
Parameters tensor (Tensor) – Data to be sent if src is the rank of current process, and tensor to be used to save received data otherwise. src (int) – Source rank on global process group (regardless of group argument). group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. async_op (bool, optional) – Whether this op should be an async op group_src (int) – Source rank on group. Must specify one of group_src and src but not both. Returns Async work handle, if async_op is set to True. None, if not async_op or if not part of the group torch.distributed.broadcast_object_list(object_list, src=None, group=None, device=None, group_src=None)[source]# Broadcasts picklable objects in object_list to the whole group. Similar to broadcast(), but Python objects can be passed in. Note that all objects in object_list must be picklable in order to be broadcasted. Parameters object_list (List[Any]) – List of input objects to broadcast. Each object must be picklable. Only objects on the src rank will be broadcast, but each rank must provide lists of equal sizes. src (int) – Source rank from which to broadcast object_list. Source rank is based on global process group (regardless of group argument) group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Default is None. device (torch.device, optional) – If not None, the objects are serialized and converted to tensors which are moved to the device before broadcasting. Default is None. group_src (int) – Source rank on group. Must specify one of group_src and src but not both. Returns None. If rank is part of the group, object_list will contain the broadcasted objects from src rank. Note For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication takes place. 
In this case, the device used is given by torch.cuda.current_device() and it is the user’s responsibility to ensure that this is set so that each rank has an individual GPU, via torch.cuda.set_device(). Note Note that this API differs slightly from the broadcast() collective since it does not provide an async_op handle and thus will be a blocking call. Warning Object collectives have a number of serious performance and scalability limitations. See Object collectives for details. Warning broadcast_object_list() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Warning Calling broadcast_object_list() with GPU tensors is not well supported and inefficient as it incurs GPU -> CPU transfer since tensors would be pickled. Please consider using broadcast() instead. Example::>>> # Note: Process group initialization omitted on each rank. >>> import torch.distributed as dist >>> if dist.get_rank() == 0: >>> # Assumes world_size of 3. >>> objects = ["foo", 12, {1: 2}] # any picklable object >>> else: >>> objects = [None, None, None] >>> # Assumes backend is not NCCL >>> device = torch.device("cpu") >>> dist.broadcast_object_list(objects, src=0, device=device) >>> objects ['foo', 12, {1: 2}] torch.distributed.all_reduce(tensor, op=<RedOpType.SUM: 0>, group=None, async_op=False)[source]# Reduces the tensor data across all machines in a way that all get the final result. After the call tensor is going to be bitwise identical in all processes. Complex tensors are supported. Parameters tensor (Tensor) – Input and output of the collective. The function operates in-place. op (optional) – One of the values from torch.distributed.ReduceOp enum. Specifies an operation used for element-wise reductions. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. 
async_op (bool, optional) – Whether this op should be an async op Returns Async work handle, if async_op is set to True. None, if not async_op or if not part of the group Examples >>> # All tensors below are of torch.int64 type. >>> # We have 2 process groups, 2 ranks. >>> device = torch.device(f"cuda:{rank}") >>> tensor = torch.arange(2, dtype=torch.int64, device=device) + 1 + 2 * rank >>> tensor tensor([1, 2], device='cuda:0') # Rank 0 tensor([3, 4], device='cuda:1') # Rank 1 >>> dist.all_reduce(tensor, op=ReduceOp.SUM) >>> tensor tensor([4, 6], device='cuda:0') # Rank 0 tensor([4, 6], device='cuda:1') # Rank 1 >>> # All tensors below are of torch.cfloat type. >>> # We have 2 process groups, 2 ranks. >>> tensor = torch.tensor( ... [1 + 1j, 2 + 2j], dtype=torch.cfloat, device=device ... ) + 2 * rank * (1 + 1j) >>> tensor tensor([1.+1.j, 2.+2.j], device='cuda:0') # Rank 0 tensor([3.+3.j, 4.+4.j], device='cuda:1') # Rank 1 >>> dist.all_reduce(tensor, op=ReduceOp.SUM) >>> tensor tensor([4.+4.j, 6.+6.j], device='cuda:0') # Rank 0 tensor([4.+4.j, 6.+6.j], device='cuda:1') # Rank 1 torch.distributed.reduce(tensor, dst=None, op=<RedOpType.SUM: 0>, group=None, async_op=False, group_dst=None)[source]# Reduces the tensor data across all machines. Only the process with rank dst is going to receive the final result. Parameters tensor (Tensor) – Input and output of the collective. The function operates in-place. dst (int) – Destination rank on global process group (regardless of group argument) op (optional) – One of the values from torch.distributed.ReduceOp enum. Specifies an operation used for element-wise reductions. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. async_op (bool, optional) – Whether this op should be an async op group_dst (int) – Destination rank on group. Must specify one of group_dst and dst but not both. Returns Async work handle, if async_op is set to True. 
None, if not async_op or if not part of the group torch.distributed.all_gather(tensor_list, tensor, group=None, async_op=False)[source]# Gathers tensors from the whole group in a list. Complex and uneven sized tensors are supported. Parameters tensor_list (list[Tensor]) – Output list. It should contain correctly-sized tensors to be used for output of the collective. Uneven sized tensors are supported. tensor (Tensor) – Tensor to be broadcast from current process. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. async_op (bool, optional) – Whether this op should be an async op Returns Async work handle, if async_op is set to True. None, if not async_op or if not part of the group Examples >>> # All tensors below are of torch.int64 dtype. >>> # We have 2 process groups, 2 ranks. >>> device = torch.device(f"cuda:{rank}") >>> tensor_list = [ ... torch.zeros(2, dtype=torch.int64, device=device) for _ in range(2) ... ] >>> tensor_list [tensor([0, 0], device='cuda:0'), tensor([0, 0], device='cuda:0')] # Rank 0 [tensor([0, 0], device='cuda:1'), tensor([0, 0], device='cuda:1')] # Rank 1 >>> tensor = torch.arange(2, dtype=torch.int64, device=device) + 1 + 2 * rank >>> tensor tensor([1, 2], device='cuda:0') # Rank 0 tensor([3, 4], device='cuda:1') # Rank 1 >>> dist.all_gather(tensor_list, tensor) >>> tensor_list [tensor([1, 2], device='cuda:0'), tensor([3, 4], device='cuda:0')] # Rank 0 [tensor([1, 2], device='cuda:1'), tensor([3, 4], device='cuda:1')] # Rank 1 >>> # All tensors below are of torch.cfloat dtype. >>> # We have 2 process groups, 2 ranks. >>> tensor_list = [ ... torch.zeros(2, dtype=torch.cfloat, device=device) for _ in range(2) ... ] >>> tensor_list [tensor([0.+0.j, 0.+0.j], device='cuda:0'), tensor([0.+0.j, 0.+0.j], device='cuda:0')] # Rank 0 [tensor([0.+0.j, 0.+0.j], device='cuda:1'), tensor([0.+0.j, 0.+0.j], device='cuda:1')] # Rank 1 >>> tensor = torch.tensor( ... 
[1 + 1j, 2 + 2j], dtype=torch.cfloat, device=device ... ) + 2 * rank * (1 + 1j) >>> tensor tensor([1.+1.j, 2.+2.j], device='cuda:0') # Rank 0 tensor([3.+3.j, 4.+4.j], device='cuda:1') # Rank 1 >>> dist.all_gather(tensor_list, tensor) >>> tensor_list [tensor([1.+1.j, 2.+2.j], device='cuda:0'), tensor([3.+3.j, 4.+4.j], device='cuda:0')] # Rank 0 [tensor([1.+1.j, 2.+2.j], device='cuda:1'), tensor([3.+3.j, 4.+4.j], device='cuda:1')] # Rank 1 torch.distributed.all_gather_into_tensor(output_tensor, input_tensor, group=None, async_op=False)[source]# Gather tensors from all ranks and put them in a single output tensor. This function requires all tensors to be the same size on each process. Parameters output_tensor (Tensor) – Output tensor to accommodate tensor elements from all ranks. It must be correctly sized to have one of the following forms: (i) a concatenation of all the input tensors along the primary dimension; for definition of “concatenation”, see torch.cat(); (ii) a stack of all the input tensors along the primary dimension; for definition of “stack”, see torch.stack(). Examples below may better explain the supported output forms. input_tensor (Tensor) – Tensor to be gathered from current rank. Different from the all_gather API, the input tensors in this API must have the same size across all ranks. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. async_op (bool, optional) – Whether this op should be an async op Returns Async work handle, if async_op is set to True. None, if not async_op or if not part of the group Examples >>> # All tensors below are of torch.int64 dtype and on CUDA devices. >>> # We have two ranks. 
>>> device = torch.device(f"cuda:{rank}") >>> tensor_in = torch.arange(2, dtype=torch.int64, device=device) + 1 + 2 * rank >>> tensor_in tensor([1, 2], device='cuda:0') # Rank 0 tensor([3, 4], device='cuda:1') # Rank 1 >>> # Output in concatenation form >>> tensor_out = torch.zeros(world_size * 2, dtype=torch.int64, device=device) >>> dist.all_gather_into_tensor(tensor_out, tensor_in) >>> tensor_out tensor([1, 2, 3, 4], device='cuda:0') # Rank 0 tensor([1, 2, 3, 4], device='cuda:1') # Rank 1 >>> # Output in stack form >>> tensor_out2 = torch.zeros(world_size, 2, dtype=torch.int64, device=device) >>> dist.all_gather_into_tensor(tensor_out2, tensor_in) >>> tensor_out2 tensor([[1, 2], [3, 4]], device='cuda:0') # Rank 0 tensor([[1, 2], [3, 4]], device='cuda:1') # Rank 1 torch.distributed.all_gather_object(object_list, obj, group=None)[source]# Gathers picklable objects from the whole group into a list. Similar to all_gather(), but Python objects can be passed in. Note that the object must be picklable in order to be gathered. Parameters object_list (list[Any]) – Output list. It should be correctly sized as the size of the group for this collective and will contain the output. obj (Any) – Picklable Python object to be broadcast from current process. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Default is None. Returns None. If the calling rank is part of this group, the output of the collective will be populated into the input object_list. If the calling rank is not part of the group, the passed in object_list will be unmodified. Note Note that this API differs slightly from the all_gather() collective since it does not provide an async_op handle and thus will be a blocking call. Note For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication takes place. 
In this case, the device used is given by torch.cuda.current_device() and it is the user’s responsibility to ensure that this is set so that each rank has an individual GPU, via torch.cuda.set_device(). Warning Object collectives have a number of serious performance and scalability limitations. See Object collectives for details. Warning all_gather_object() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Warning Calling all_gather_object() with GPU tensors is not well supported and inefficient as it incurs GPU -> CPU transfer since tensors would be pickled. Please consider using all_gather() instead. Example::>>> # Note: Process group initialization omitted on each rank. >>> import torch.distributed as dist >>> # Assumes world_size of 3. >>> gather_objects = ["foo", 12, {1: 2}] # any picklable object >>> output = [None for _ in gather_objects] >>> dist.all_gather_object(output, gather_objects[dist.get_rank()]) >>> output ['foo', 12, {1: 2}] torch.distributed.gather(tensor, gather_list=None, dst=None, group=None, async_op=False, group_dst=None)[source]# Gathers a list of tensors in a single process. This function requires all tensors to be the same size on each process. Parameters tensor (Tensor) – Input tensor. gather_list (list[Tensor], optional) – List of appropriately-sized tensors to use for gathered data (default is None, must be specified on the destination rank) dst (int, optional) – Destination rank on global process group (regardless of group argument). (If both dst and group_dst are None, default is global rank 0) group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. async_op (bool, optional) – Whether this op should be an async op group_dst (int, optional) – Destination rank on group. 
Invalid to specify both dst and group_dst Returns Async work handle, if async_op is set to True. None, if not async_op or if not part of the group Note Note that all Tensors in gather_list must have the same size. Example::>>> # We have 2 process groups, 2 ranks. >>> tensor_size = 2 >>> device = torch.device(f'cuda:{rank}') >>> tensor = torch.ones(tensor_size, device=device) + rank >>> if dist.get_rank() == 0: >>> gather_list = [torch.zeros_like(tensor, device=device) for i in range(2)] >>> else: >>> gather_list = None >>> dist.gather(tensor, gather_list, dst=0) >>> # Rank 0 gets gathered data. >>> gather_list [tensor([1., 1.], device='cuda:0'), tensor([2., 2.], device='cuda:0')] # Rank 0 None # Rank 1 torch.distributed.gather_object(obj, object_gather_list=None, dst=None, group=None, group_dst=None)[source]# Gathers picklable objects from the whole group in a single process. Similar to gather(), but Python objects can be passed in. Note that the object must be picklable in order to be gathered. Parameters obj (Any) – Input object. Must be picklable. object_gather_list (list[Any]) – Output list. On the dst rank, it should be correctly sized as the size of the group for this collective and will contain the output. Must be None on non-dst ranks. (default is None) dst (int, optional) – Destination rank on global process group (regardless of group argument). (If both dst and group_dst are None, default is global rank 0) group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Default is None. group_dst (int, optional) – Destination rank on group. Invalid to specify both dst and group_dst Returns None. On the dst rank, object_gather_list will contain the output of the collective. Note Note that this API differs slightly from the gather collective since it does not provide an async_op handle and thus will be a blocking call. 
Note For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication takes place. In this case, the device used is given by torch.cuda.current_device() and it is the user’s responsibility to ensure that this is set so that each rank has an individual GPU, via torch.cuda.set_device(). Warning Object collectives have a number of serious performance and scalability limitations. See Object collectives for details. Warning gather_object() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Warning Calling gather_object() with GPU tensors is not well supported and inefficient as it incurs GPU -> CPU transfer since tensors would be pickled. Please consider using gather() instead. Example::>>> # Note: Process group initialization omitted on each rank. >>> import torch.distributed as dist >>> # Assumes world_size of 3. >>> gather_objects = ["foo", 12, {1: 2}] # any picklable object >>> output = [None for _ in gather_objects] >>> dist.gather_object( ... gather_objects[dist.get_rank()], ... output if dist.get_rank() == 0 else None, ... dst=0 ... ) >>> # On rank 0 >>> output ['foo', 12, {1: 2}] torch.distributed.scatter(tensor, scatter_list=None, src=None, group=None, async_op=False, group_src=None)[source]# Scatters a list of tensors to all processes in a group. Each process will receive exactly one tensor and store its data in the tensor argument. Complex tensors are supported. Parameters tensor (Tensor) – Output tensor. scatter_list (list[Tensor]) – List of tensors to scatter (default is None, must be specified on the source rank) src (int) – Source rank on global process group (regardless of group argument). (If both src and group_src are None, default is global rank 0) group (ProcessGroup, optional) – The process group to work on. 
If None, the default process group will be used. async_op (bool, optional) – Whether this op should be an async op group_src (int, optional) – Source rank on group. Invalid to specify both src and group_src Returns Async work handle, if async_op is set to True. None, if not async_op or if not part of the group Note Note that all Tensors in scatter_list must have the same size. Example::>>> # Note: Process group initialization omitted on each rank. >>> import torch.distributed as dist >>> tensor_size = 2 >>> device = torch.device(f'cuda:{rank}') >>> output_tensor = torch.zeros(tensor_size, device=device) >>> if dist.get_rank() == 0: >>> # Assumes world_size of 2. >>> # Only tensors, all of which must be the same size. >>> t_ones = torch.ones(tensor_size, device=device) >>> t_fives = torch.ones(tensor_size, device=device) * 5 >>> scatter_list = [t_ones, t_fives] >>> else: >>> scatter_list = None >>> dist.scatter(output_tensor, scatter_list, src=0) >>> # Rank i gets scatter_list[i]. >>> output_tensor tensor([1., 1.], device='cuda:0') # Rank 0 tensor([5., 5.], device='cuda:1') # Rank 1 torch.distributed.scatter_object_list(scatter_object_output_list, scatter_object_input_list=None, src=None, group=None, group_src=None)[source]# Scatters picklable objects in scatter_object_input_list to the whole group. Similar to scatter(), but Python objects can be passed in. On each rank, the scattered object will be stored as the first element of scatter_object_output_list. Note that all objects in scatter_object_input_list must be picklable in order to be scattered. Parameters scatter_object_output_list (List[Any]) – Non-empty list whose first element will store the object scattered to this rank. scatter_object_input_list (List[Any], optional) – List of input objects to scatter. Each object must be picklable. Only objects on the src rank will be scattered, and the argument can be None for non-src ranks. src (int) – Source rank from which to scatter scatter_object_input_list. 
Source rank is based on global process group (regardless of group argument). (If both src and group_src are None, default is global rank 0) group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Default is None. group_src (int, optional) – Source rank on group. Invalid to specify both src and group_src Returns None. If rank is part of the group, scatter_object_output_list will have its first element set to the scattered object for this rank. Note Note that this API differs slightly from the scatter collective since it does not provide an async_op handle and thus will be a blocking call. Warning Object collectives have a number of serious performance and scalability limitations. See Object collectives for details. Warning scatter_object_list() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Warning Calling scatter_object_list() with GPU tensors is not well supported and inefficient as it incurs GPU -> CPU transfer since tensors would be pickled. Please consider using scatter() instead. Example::>>> # Note: Process group initialization omitted on each rank. >>> import torch.distributed as dist >>> if dist.get_rank() == 0: >>> # Assumes world_size of 3. >>> objects = ["foo", 12, {1: 2}] # any picklable object >>> else: >>> # Can be any list on non-src ranks, elements are not used. >>> objects = [None, None, None] >>> output_list = [None] >>> dist.scatter_object_list(output_list, objects, src=0) >>> # Rank i gets objects[i]. For example, on rank 2: >>> output_list [{1: 2}] torch.distributed.reduce_scatter(output, input_list, op=<RedOpType.SUM: 0>, group=None, async_op=False)[source]# Reduces, then scatters a list of tensors to all processes in a group. Parameters output (Tensor) – Output tensor. 
input_list (list[Tensor]) – List of tensors to reduce and scatter. op (optional) – One of the values from torch.distributed.ReduceOp enum. Specifies an operation used for element-wise reductions. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. async_op (bool, optional) – Whether this op should be an async op. Returns Async work handle, if async_op is set to True. None, if not async_op or if not part of the group. torch.distributed.reduce_scatter_tensor(output, input, op=<RedOpType.SUM: 0>, group=None, async_op=False)[source]# Reduces, then scatters a tensor to all ranks in a group. Parameters output (Tensor) – Output tensor. It should have the same size across all ranks. input (Tensor) – Input tensor to be reduced and scattered. Its size should be output tensor size times the world size. The input tensor can have one of the following shapes: (i) a concatenation of the output tensors along the primary dimension, or (ii) a stack of the output tensors along the primary dimension. For definition of “concatenation”, see torch.cat(). For definition of “stack”, see torch.stack(). group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. async_op (bool, optional) – Whether this op should be an async op. Returns Async work handle, if async_op is set to True. None, if not async_op or if not part of the group. Examples >>> # All tensors below are of torch.int64 dtype and on CUDA devices. >>> # We have two ranks. 
>>> device = torch.device(f"cuda:{rank}") >>> tensor_out = torch.zeros(2, dtype=torch.int64, device=device) >>> # Input in concatenation form >>> tensor_in = torch.arange(world_size * 2, dtype=torch.int64, device=device) >>> tensor_in tensor([0, 1, 2, 3], device='cuda:0') # Rank 0 tensor([0, 1, 2, 3], device='cuda:1') # Rank 1 >>> dist.reduce_scatter_tensor(tensor_out, tensor_in) >>> tensor_out tensor([0, 2], device='cuda:0') # Rank 0 tensor([4, 6], device='cuda:1') # Rank 1 >>> # Input in stack form >>> tensor_in = torch.reshape(tensor_in, (world_size, 2)) >>> tensor_in tensor([[0, 1], [2, 3]], device='cuda:0') # Rank 0 tensor([[0, 1], [2, 3]], device='cuda:1') # Rank 1 >>> dist.reduce_scatter_tensor(tensor_out, tensor_in) >>> tensor_out tensor([0, 2], device='cuda:0') # Rank 0 tensor([4, 6], device='cuda:1') # Rank 1 torch.distributed.all_to_all_single(output, input, output_split_sizes=None, input_split_sizes=None, group=None, async_op=False)[source]# Splits the input tensor and then scatters the split list to all processes in a group. The received tensors are then concatenated from all the processes in the group and returned as a single output tensor. Complex tensors are supported. Parameters output (Tensor) – Gathered concatenated output tensor. input (Tensor) – Input tensor to scatter. output_split_sizes (list[Int], optional) – Output split sizes for dim 0. If None or empty, dim 0 of the output tensor must divide equally by world_size. input_split_sizes (list[Int], optional) – Input split sizes for dim 0. If None or empty, dim 0 of the input tensor must divide equally by world_size. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. async_op (bool, optional) – Whether this op should be an async op. Returns Async work handle, if async_op is set to True. None, if not async_op or if not part of the group. Warning all_to_all_single is experimental and subject to change. 
Examples >>> input = torch.arange(4) + rank * 4 >>> input tensor([0, 1, 2, 3]) # Rank 0 tensor([4, 5, 6, 7]) # Rank 1 tensor([8, 9, 10, 11]) # Rank 2 tensor([12, 13, 14, 15]) # Rank 3 >>> output = torch.empty([4], dtype=torch.int64) >>> dist.all_to_all_single(output, input) >>> output tensor([0, 4, 8, 12]) # Rank 0 tensor([1, 5, 9, 13]) # Rank 1 tensor([2, 6, 10, 14]) # Rank 2 tensor([3, 7, 11, 15]) # Rank 3 >>> # Essentially, it is similar to following operation: >>> scatter_list = list(input.chunk(world_size)) >>> gather_list = list(output.chunk(world_size)) >>> for i in range(world_size): >>> dist.scatter(gather_list[i], scatter_list if i == rank else [], src = i) >>> # Another example with uneven split >>> input tensor([0, 1, 2, 3, 4, 5]) # Rank 0 tensor([10, 11, 12, 13, 14, 15, 16, 17, 18]) # Rank 1 tensor([20, 21, 22, 23, 24]) # Rank 2 tensor([30, 31, 32, 33, 34, 35, 36]) # Rank 3 >>> input_splits [2, 2, 1, 1] # Rank 0 [3, 2, 2, 2] # Rank 1 [2, 1, 1, 1] # Rank 2 [2, 2, 2, 1] # Rank 3 >>> output_splits [2, 3, 2, 2] # Rank 0 [2, 2, 1, 2] # Rank 1 [1, 2, 1, 2] # Rank 2 [1, 2, 1, 1] # Rank 3 >>> output = ... >>> dist.all_to_all_single(output, input, output_splits, input_splits) >>> output tensor([ 0, 1, 10, 11, 12, 20, 21, 30, 31]) # Rank 0 tensor([ 2, 3, 13, 14, 22, 32, 33]) # Rank 1 tensor([ 4, 15, 16, 23, 34, 35]) # Rank 2 tensor([ 5, 17, 18, 24, 36]) # Rank 3 >>> # Another example with tensors of torch.cfloat type. >>> input = torch.tensor( ... [1 + 1j, 2 + 2j, 3 + 3j, 4 + 4j], dtype=torch.cfloat ... 
) + 4 * rank * (1 + 1j) >>> input tensor([1+1j, 2+2j, 3+3j, 4+4j]) # Rank 0 tensor([5+5j, 6+6j, 7+7j, 8+8j]) # Rank 1 tensor([9+9j, 10+10j, 11+11j, 12+12j]) # Rank 2 tensor([13+13j, 14+14j, 15+15j, 16+16j]) # Rank 3 >>> output = torch.empty([4], dtype=torch.cfloat) >>> dist.all_to_all_single(output, input) >>> output tensor([1+1j, 5+5j, 9+9j, 13+13j]) # Rank 0 tensor([2+2j, 6+6j, 10+10j, 14+14j]) # Rank 1 tensor([3+3j, 7+7j, 11+11j, 15+15j]) # Rank 2 tensor([4+4j, 8+8j, 12+12j, 16+16j]) # Rank 3 torch.distributed.all_to_all(output_tensor_list, input_tensor_list, group=None, async_op=False)[source]# Scatters a list of input tensors to all processes in a group and returns a gathered list of tensors in the output list. Complex tensors are supported. Parameters output_tensor_list (list[Tensor]) – List of tensors to be gathered, one per rank. input_tensor_list (list[Tensor]) – List of tensors to scatter, one per rank. group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. async_op (bool, optional) – Whether this op should be an async op. Returns Async work handle, if async_op is set to True. None, if not async_op or if not part of the group. Warning all_to_all is experimental and subject to change. 
Examples >>> input = torch.arange(4) + rank * 4 >>> input = list(input.chunk(4)) >>> input [tensor([0]), tensor([1]), tensor([2]), tensor([3])] # Rank 0 [tensor([4]), tensor([5]), tensor([6]), tensor([7])] # Rank 1 [tensor([8]), tensor([9]), tensor([10]), tensor([11])] # Rank 2 [tensor([12]), tensor([13]), tensor([14]), tensor([15])] # Rank 3 >>> output = list(torch.empty([4], dtype=torch.int64).chunk(4)) >>> dist.all_to_all(output, input) >>> output [tensor([0]), tensor([4]), tensor([8]), tensor([12])] # Rank 0 [tensor([1]), tensor([5]), tensor([9]), tensor([13])] # Rank 1 [tensor([2]), tensor([6]), tensor([10]), tensor([14])] # Rank 2 [tensor([3]), tensor([7]), tensor([11]), tensor([15])] # Rank 3 >>> # Essentially, it is similar to following operation: >>> scatter_list = input >>> gather_list = output >>> for i in range(world_size): >>> dist.scatter(gather_list[i], scatter_list if i == rank else [], src=i) >>> input tensor([0, 1, 2, 3, 4, 5]) # Rank 0 tensor([10, 11, 12, 13, 14, 15, 16, 17, 18]) # Rank 1 tensor([20, 21, 22, 23, 24]) # Rank 2 tensor([30, 31, 32, 33, 34, 35, 36]) # Rank 3 >>> input_splits [2, 2, 1, 1] # Rank 0 [3, 2, 2, 2] # Rank 1 [2, 1, 1, 1] # Rank 2 [2, 2, 2, 1] # Rank 3 >>> output_splits [2, 3, 2, 2] # Rank 0 [2, 2, 1, 2] # Rank 1 [1, 2, 1, 2] # Rank 2 [1, 2, 1, 1] # Rank 3 >>> input = list(input.split(input_splits)) >>> input [tensor([0, 1]), tensor([2, 3]), tensor([4]), tensor([5])] # Rank 0 [tensor([10, 11, 12]), tensor([13, 14]), tensor([15, 16]), tensor([17, 18])] # Rank 1 [tensor([20, 21]), tensor([22]), tensor([23]), tensor([24])] # Rank 2 [tensor([30, 31]), tensor([32, 33]), tensor([34, 35]), tensor([36])] # Rank 3 >>> output = ... 
>>> dist.all_to_all(output, input) >>> output [tensor([0, 1]), tensor([10, 11, 12]), tensor([20, 21]), tensor([30, 31])] # Rank 0 [tensor([2, 3]), tensor([13, 14]), tensor([22]), tensor([32, 33])] # Rank 1 [tensor([4]), tensor([15, 16]), tensor([23]), tensor([34, 35])] # Rank 2 [tensor([5]), tensor([17, 18]), tensor([24]), tensor([36])] # Rank 3 >>> # Another example with tensors of torch.cfloat type. >>> input = torch.tensor( ... [1 + 1j, 2 + 2j, 3 + 3j, 4 + 4j], dtype=torch.cfloat ... ) + 4 * rank * (1 + 1j) >>> input = list(input.chunk(4)) >>> input [tensor([1+1j]), tensor([2+2j]), tensor([3+3j]), tensor([4+4j])] # Rank 0 [tensor([5+5j]), tensor([6+6j]), tensor([7+7j]), tensor([8+8j])] # Rank 1 [tensor([9+9j]), tensor([10+10j]), tensor([11+11j]), tensor([12+12j])] # Rank 2 [tensor([13+13j]), tensor([14+14j]), tensor([15+15j]), tensor([16+16j])] # Rank 3 >>> output = list(torch.empty([4], dtype=torch.cfloat).chunk(4)) >>> dist.all_to_all(output, input) >>> output [tensor([1+1j]), tensor([5+5j]), tensor([9+9j]), tensor([13+13j])] # Rank 0 [tensor([2+2j]), tensor([6+6j]), tensor([10+10j]), tensor([14+14j])] # Rank 1 [tensor([3+3j]), tensor([7+7j]), tensor([11+11j]), tensor([15+15j])] # Rank 2 [tensor([4+4j]), tensor([8+8j]), tensor([12+12j]), tensor([16+16j])] # Rank 3 torch.distributed.barrier(group=None, async_op=False, device_ids=None)[source]# Synchronize all processes. This collective blocks processes until the whole group enters this function, if async_op is False, or until wait() is called on the async work handle. Parameters group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. async_op (bool, optional) – Whether this op should be an async op device_ids ([int], optional) – List of device/GPU ids. Only one id is expected. Returns Async work handle, if async_op is set to True. 
None, if not async_op or if not part of the group Note ProcessGroupNCCL now blocks the CPU thread until the completion of the barrier collective. Note ProcessGroupNCCL implements barrier as an all_reduce of a 1-element tensor. A device must be chosen for allocating this tensor. The device choice is made by checking, in this order: (1) the first device passed to the device_ids arg of barrier, if not None; (2) the device passed to init_process_group, if not None; (3) the device that was first used with this process group, if another collective with tensor inputs has been performed; (4) the device index indicated by the global rank mod the local device count. torch.distributed.monitored_barrier(group=None, timeout=None, wait_all_ranks=False)[source]# Synchronize processes similar to torch.distributed.barrier, but with a configurable timeout. It is able to report ranks that did not pass this barrier within the provided timeout. Specifically, non-zero ranks will block until a send/recv is processed from rank 0. Rank 0 will block until all send/recv from other ranks are processed, and will report failures for ranks that failed to respond in time. Note that if one rank does not reach the monitored_barrier (for example due to a hang), all other ranks will fail in monitored_barrier. This collective will block all processes/ranks in the group until the whole group exits the function successfully, making it useful for debugging and synchronizing. However, it can have a performance impact and should only be used for debugging or for scenarios that require full synchronization points on the host side. For debugging purposes, this barrier can be inserted before the application’s collective calls to check if any ranks are desynchronized. Note Note that this collective is only supported with the GLOO backend. Parameters group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. 
timeout (datetime.timedelta, optional) – Timeout for monitored_barrier. If None, the default process group timeout will be used. wait_all_ranks (bool, optional) – Whether to collect all failed ranks or not. By default, this is False and monitored_barrier on rank 0 will throw on the first failed rank it encounters in order to fail fast. By setting wait_all_ranks=True monitored_barrier will collect all failed ranks and throw an error containing information about all failed ranks. Returns None. Example::>>> # Note: Process group initialization omitted on each rank. >>> import torch.distributed as dist >>> if dist.get_rank() != 1: >>> dist.monitored_barrier() # Raises exception indicating that >>> # rank 1 did not call into monitored_barrier. >>> # Example with wait_all_ranks=True >>> if dist.get_rank() == 0: >>> dist.monitored_barrier(wait_all_ranks=True) # Raises exception >>> # indicating that ranks 1, 2, ... world_size - 1 did not call into >>> # monitored_barrier. class torch.distributed.Work# A Work object represents the handle to a pending asynchronous operation in PyTorch’s distributed package. It is returned by non-blocking collective operations, such as dist.all_reduce(tensor, async_op=True). block_current_stream(self: torch._C._distributed_c10d.Work) → None# Blocks the currently active GPU stream on the operation to complete. For GPU based collectives this is equivalent to synchronize. For CPU initiated collectives such as with Gloo this will block the CUDA stream until the operation is complete. This returns immediately in all cases. To check whether an operation was successful you should check the Work object result asynchronously. boxed(self: torch._C._distributed_c10d.Work) → object# exception(self: torch._C._distributed_c10d.Work) → std::__exception_ptr::exception_ptr# get_future(self: torch._C._distributed_c10d.Work) → torch.Future# Returns A torch.futures.Future object which is associated with the completion of the Work. 
As an example, a future object can be retrieved by fut = process_group.allreduce(tensors).get_future(). Example::Below is an example of a simple allreduce DDP communication hook that uses the get_future API to retrieve a Future associated with the completion of allreduce. >>> def allreduce(process_group: dist.ProcessGroup, bucket: dist.GradBucket) -> torch.futures.Future: >>> group_to_use = process_group if process_group is not None else torch.distributed.group.WORLD >>> tensor = bucket.buffer().div_(group_to_use.size()) >>> return torch.distributed.all_reduce(tensor, group=group_to_use, async_op=True).get_future() >>> ddp_model.register_comm_hook(state=None, hook=allreduce) Warning The get_future API supports the NCCL backend, and partially the GLOO and MPI backends (with no support for peer-to-peer operations such as send/recv), and will return a torch.futures.Future. In the example above, the allreduce work will be done on the GPU using the NCCL backend, and fut.wait() will return after synchronizing the appropriate NCCL streams with PyTorch’s current device streams, enabling asynchronous CUDA execution without waiting for the entire operation to complete on the GPU. Note that CUDAFuture does not support the TORCH_NCCL_BLOCKING_WAIT flag or NCCL’s barrier(). In addition, if a callback function was added by fut.then(), it will wait until WorkNCCL’s NCCL streams synchronize with ProcessGroupNCCL’s dedicated callback stream and invoke the callback inline after running it on the callback stream. fut.then() will return another CUDAFuture that holds the return value of the callback and a CUDAEvent that recorded the callback stream. For CPU work, fut.done() returns true when the work has been completed and value() tensors are ready. For GPU work, fut.done() returns true only when the operation has been enqueued. For mixed CPU-GPU work (e.g. sending GPU tensors with GLOO), fut.done() returns true when the tensors have arrived on the respective nodes, but not necessarily when they have been synchronized on the respective GPUs (similarly to GPU work). get_future_result(self: torch._C._distributed_c10d.Work) → torch.Future# Returns A torch.futures.Future object of int type which maps to the enum type of WorkResult. As an example, a future object can be retrieved by fut = process_group.allreduce(tensor).get_future_result(). Example::Users can use fut.wait() to block until the work completes and then get the WorkResult via fut.value(). Users can also use fut.then(callback_func) to register a callback function to be called when the work is completed, without blocking the current thread. Warning The get_future_result API is only supported by the NCCL backend. is_completed(self: torch._C._distributed_c10d.Work) → bool# is_success(self: torch._C._distributed_c10d.Work) → bool# result(self: torch._C._distributed_c10d.Work) → list[torch.Tensor]# source_rank(self: torch._C._distributed_c10d.Work) → int# synchronize(self: torch._C._distributed_c10d.Work) → None# static unbox(arg0: object) → torch._C._distributed_c10d.Work# wait(self: torch._C._distributed_c10d.Work, timeout: datetime.timedelta = datetime.timedelta(0)) → bool# Returns true/false. Example::>>> try: >>> work.wait(timeout) >>> except: >>> # some handling Warning In normal cases, users do not need to set the timeout; calling wait() is the same as calling synchronize(), letting the current stream block on the completion of the NCCL work. However, if a timeout is set, wait() will block the CPU thread until the NCCL work is completed or times out; on timeout, an exception is thrown. class torch.distributed.ReduceOp# An enum-like class for available reduction operations: SUM, PRODUCT, MIN, MAX, BAND, BOR, BXOR, and PREMUL_SUM. BAND, BOR, and BXOR reductions are not available when using the NCCL backend. AVG divides values by the world size before summing across ranks. 
AVG is only available with the NCCL backend, and only for NCCL versions 2.10 or later. PREMUL_SUM multiplies inputs by a given scalar locally before reduction. PREMUL_SUM is only available with the NCCL backend, and only for NCCL versions 2.11 or later. Users are expected to use torch.distributed._make_nccl_premul_sum. Additionally, MAX, MIN, and PRODUCT are not supported for complex tensors. The values of this class can be accessed as attributes, e.g., ReduceOp.SUM. They are used in specifying strategies for reduction collectives, e.g., reduce(). This class does not support the __members__ property. class torch.distributed.reduce_op# Deprecated enum-like class for reduction operations: SUM, PRODUCT, MIN, and MAX. Use ReduceOp instead. Distributed Key-Value Store# The distributed package comes with a distributed key-value store, which can be used to share information between processes in the group as well as to initialize the distributed package in torch.distributed.init_process_group() (by explicitly creating the store as an alternative to specifying init_method). There are three choices for key-value stores: TCPStore, FileStore, and HashStore. class torch.distributed.Store# Base class for all store implementations, such as the three provided by PyTorch distributed: TCPStore, FileStore, and HashStore. __init__(self: torch._C._distributed_c10d.Store) → None# add(self: torch._C._distributed_c10d.Store, arg0: str, arg1: SupportsInt) → int# The first call to add for a given key creates a counter associated with key in the store, initialized to amount. Subsequent calls to add with the same key increment the counter by the specified amount. Calling add() with a key that has already been set in the store by set() will result in an exception. Parameters key (str) – The key in the store whose counter will be incremented. amount (int) – The quantity by which the counter will be incremented. 
Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> # Using TCPStore as an example, other store types can also be used >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> store.add("first_key", 1) >>> store.add("first_key", 6) >>> # Should return 7 >>> store.get("first_key") append(self: torch._C._distributed_c10d.Store, arg0: str, arg1: str) → None# Appends the key-value pair into the store based on the supplied key and value. If key does not exist in the store, it will be created. Parameters key (str) – The key to be appended to in the store. value (str) – The value associated with key to be added to the store. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> store.append("first_key", "po") >>> store.append("first_key", "tato") >>> # Should return "potato" >>> store.get("first_key") check(self: torch._C._distributed_c10d.Store, arg0: collections.abc.Sequence[str]) → bool# Checks whether each key in the given list has a value stored in the store. This call returns immediately in normal cases, but still suffers from some edge-case deadlocks, e.g., calling check after the TCPStore has been destroyed. Parameters keys (list[str]) – The keys whose presence in the store is to be queried. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> # Using TCPStore as an example, other store types can also be used >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> store.add("first_key", 1) >>> # Should return true >>> store.check(["first_key"]) clone(self: torch._C._distributed_c10d.Store) → torch._C._distributed_c10d.Store# Clones the store and returns a new object that points to the same underlying store. The returned store can be used concurrently with the original object. 
This is intended to provide a safe way to use a store from multiple threads by cloning one store per thread. compare_set(self: torch._C._distributed_c10d.Store, arg0: str, arg1: str, arg2: str) → bytes# Inserts the key-value pair into the store based on the supplied key, comparing expected_value against the current value before inserting. desired_value will only be set if the current value for key equals expected_value, or if the key does not exist and expected_value is an empty string. Parameters key (str) – The key to be checked in the store. expected_value (str) – The value to compare against the current value of key before insertion. desired_value (str) – The value associated with key to be added to the store. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> store.set("key", "first_value") >>> store.compare_set("key", "first_value", "second_value") >>> # Should return "second_value" >>> store.get("key") delete_key(self: torch._C._distributed_c10d.Store, arg0: str) → bool# Deletes the key-value pair associated with key from the store. Returns true if the key was successfully deleted, and false if it was not. Warning The delete_key API is only supported by the TCPStore and HashStore. Using this API with the FileStore will result in an exception. Parameters key (str) – The key to be deleted from the store Returns True if key was deleted, otherwise False. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> # Using TCPStore as an example, HashStore can also be used >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> store.set("first_key", "first_value") >>> # This should return true >>> store.delete_key("first_key") >>> # This should return false >>> store.delete_key("bad_key") get(self: torch._C._distributed_c10d.Store, arg0: str) → bytes# Retrieves the value associated with the given key in the store. 
If key is not present in the store, the function will wait for timeout, which is defined when initializing the store, before throwing an exception. Parameters key (str) – The function will return the value associated with this key. Returns Value associated with key if key is in the store. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> store.set("first_key", "first_value") >>> # Should return "first_value" >>> store.get("first_key") has_extended_api(self: torch._C._distributed_c10d.Store) → bool# Returns true if the store supports extended operations. multi_get(self: torch._C._distributed_c10d.Store, arg0: collections.abc.Sequence[str]) → list[bytes]# Retrieves all values in keys. If any key in keys is not present in the store, the function will wait for the timeout before throwing an exception. Parameters keys (List[str]) – The keys to be retrieved from the store. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> store.set("first_key", "po") >>> store.set("second_key", "tato") >>> # Should return [b"po", b"tato"] >>> store.multi_get(["first_key", "second_key"]) multi_set(self: torch._C._distributed_c10d.Store, arg0: collections.abc.Sequence[str], arg1: collections.abc.Sequence[str]) → None# Inserts a list of key-value pairs into the store based on the supplied keys and values. Parameters keys (List[str]) – The keys to insert. values (List[str]) – The values to insert. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> store.multi_set(["first_key", "second_key"], ["po", "tato"]) >>> # Should return b"po" >>> store.get("first_key") num_keys(self: torch._C._distributed_c10d.Store) → int# Returns the number of keys set in the store. 
Note that this number will typically be one greater than the number of keys added by set() and add() since one key is used to coordinate all the workers using the store. Warning When used with the FileStore, num_keys returns the number of keys written to the underlying file. If the store is destructed and another store is created with the same file, the original keys will be retained. Returns The number of keys present in the store. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> # Using TCPStore as an example, other store types can also be used >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> store.set("first_key", "first_value") >>> # This should return 2 >>> store.num_keys() queue_len(self: torch._C._distributed_c10d.Store, arg0: str) → int# Returns the length of the specified queue. If the queue doesn’t exist, it returns 0. See queue_push for more details. Parameters key (str) – The key of the queue whose length to return. queue_pop(self: torch._C._distributed_c10d.Store, key: str, block: bool = True) → bytes# Pops a value from the specified queue, or waits until the timeout if the queue is empty. See queue_push for more details. If block is False, a dist.QueueEmptyError will be raised if the queue is empty. Parameters key (str) – The key of the queue to pop from. block (bool) – Whether to block waiting for the key or to return immediately. queue_push(self: torch._C._distributed_c10d.Store, arg0: str, arg1: str) → None# Pushes a value into the specified queue. Using the same key for queues and set/get operations may result in unexpected behavior. The wait/check operations are supported for queues; wait with queues will only wake one waiting worker rather than all. Parameters key (str) – The key of the queue to push to. value (str) – The value to push into the queue. 
set(self: torch._C._distributed_c10d.Store, arg0: str, arg1: str) → None# Inserts the key-value pair into the store based on the supplied key and value. If key already exists in the store, it will overwrite the old value with the new supplied value. Parameters key (str) – The key to be added to the store. value (str) – The value associated with key to be added to the store. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> store.set("first_key", "first_value") >>> # Should return "first_value" >>> store.get("first_key") set_timeout(self: torch._C._distributed_c10d.Store, arg0: datetime.timedelta) → None# Sets the store’s default timeout. This timeout is used during initialization and in wait() and get(). Parameters timeout (timedelta) – timeout to be set in the store. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> # Using TCPStore as an example, other store types can also be used >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> store.set_timeout(timedelta(seconds=10)) >>> # This will throw an exception after 10 seconds >>> store.wait(["bad_key"]) property timeout# Gets the timeout of the store. wait(*args, **kwargs)# Overloaded function. wait(self: torch._C._distributed_c10d.Store, arg0: collections.abc.Sequence[str]) -> None Waits for each key in keys to be added to the store. If not all keys are set before the timeout (set during store initialization), then wait will throw an exception. Parameters keys (list) – List of keys on which to wait until they are set in the store. 
Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> # Using TCPStore as an example, other store types can also be used >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> # This will throw an exception after 30 seconds >>> store.wait(["bad_key"]) wait(self: torch._C._distributed_c10d.Store, arg0: collections.abc.Sequence[str], arg1: datetime.timedelta) -> None Waits for each key in keys to be added to the store, and throws an exception if the keys have not been set by the supplied timeout. Parameters keys (list) – List of keys on which to wait until they are set in the store. timeout (timedelta) – Time to wait for the keys to be added before throwing an exception. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> # Using TCPStore as an example, other store types can also be used >>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30)) >>> # This will throw an exception after 10 seconds >>> store.wait(["bad_key"], timedelta(seconds=10)) class torch.distributed.TCPStore# A TCP-based distributed key-value store implementation. The server store holds the data, while the client stores can connect to the server store over TCP and perform actions such as set() to insert a key-value pair, get() to retrieve a key-value pair, etc. There should always be one server store initialized because the client store(s) will wait for the server to establish a connection. Parameters host_name (str) – The hostname or IP Address the server store should run on. port (int) – The port on which the server store should listen for incoming requests. world_size (int, optional) – The total number of store users (number of clients + 1 for the server). Default is None (None indicates a non-fixed number of store users). is_master (bool, optional) – True when initializing the server store and False for client stores. Default is False. 
timeout (timedelta, optional) – Timeout used by the store during initialization and for methods such as get() and wait(). Default is timedelta(seconds=300) wait_for_workers (bool, optional) – Whether to wait for all the workers to connect with the server store. This is only applicable when world_size is a fixed value. Default is True. multi_tenant (bool, optional) – If True, all TCPStore instances in the current process with the same host/port will use the same underlying TCPServer. Default is False. master_listen_fd (int, optional) – If specified, the underlying TCPServer will listen on this file descriptor, which must be a socket already bound to port. To bind an ephemeral port we recommend setting the port to 0 and reading .port. Default is None (meaning the server creates a new socket and attempts to bind it to port). use_libuv (bool, optional) – If True, use libuv for TCPServer backend. Default is True. Example::>>> import torch.distributed as dist >>> from datetime import timedelta >>> # Run on process 1 (server) >>> server_store = dist.TCPStore("127.0.0.1", 1234, 2, True, timedelta(seconds=30)) >>> # Run on process 2 (client) >>> client_store = dist.TCPStore("127.0.0.1", 1234, 2, False) >>> # Use any of the store methods from either the client or server after initialization >>> server_store.set("first_key", "first_value") >>> client_store.get("first_key") __init__(self: torch._C._distributed_c10d.TCPStore, host_name: str, port: SupportsInt, world_size: SupportsInt | None = None, is_master: bool = False, timeout: datetime.timedelta = datetime.timedelta(seconds=300), wait_for_workers: bool = True, multi_tenant: bool = False, master_listen_fd: SupportsInt | None = None, use_libuv: bool = True) → None# Creates a new TCPStore. property host# Gets the hostname on which the store listens for requests. property libuvBackend# Returns True if it’s using the libuv backend. property port# Gets the port number on which the store listens for requests. 
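The constructor notes above recommend binding an ephemeral port by passing port 0 and reading back .port. The following hedged sketch shows that pattern on a single host; both stores are created in one process purely for illustration (in practice the server and client live in different processes), and wait_for_workers=False keeps the server constructor from blocking until the client connects.

```python
# Sketch of ephemeral-port binding: the server passes port=0 so the OS
# picks a free port, which is then read back via .port and handed to the
# client store.
import torch.distributed as dist
from datetime import timedelta

server = dist.TCPStore(
    "127.0.0.1", 0, world_size=2, is_master=True,
    timeout=timedelta(seconds=30), wait_for_workers=False,
)
# server.port now holds the OS-assigned port.

client = dist.TCPStore(
    "127.0.0.1", server.port, world_size=2, is_master=False,
    timeout=timedelta(seconds=30),
)

server.set("first_key", "first_value")
assert client.get("first_key") == b"first_value"
```

In a real deployment the assigned port would be communicated to the client processes out of band (e.g. via an environment variable or a launcher).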
class torch.distributed.HashStore# A thread-safe store implementation based on an underlying hashmap. This store can be used within the same process (for example, by other threads), but cannot be used across processes. Example::>>> import torch.distributed as dist >>> store = dist.HashStore() >>> # store can be used from other threads >>> # Use any of the store methods after initialization >>> store.set("first_key", "first_value") __init__(self: torch._C._distributed_c10d.HashStore) → None# Creates a new HashStore. class torch.distributed.FileStore# A store implementation that uses a file to store the underlying key-value pairs. Parameters file_name (str) – path of the file in which to store the key-value pairs world_size (int, optional) – The total number of processes using the store. Default is -1 (a negative value indicates a non-fixed number of store users). Example::>>> import torch.distributed as dist >>> store1 = dist.FileStore("/tmp/filestore", 2) >>> store2 = dist.FileStore("/tmp/filestore", 2) >>> # Use any of the store methods from either the client or server after initialization >>> store1.set("first_key", "first_value") >>> store2.get("first_key") __init__(self: torch._C._distributed_c10d.FileStore, file_name: str, world_size: SupportsInt = -1) → None# Creates a new FileStore. property path# Gets the path of the file used by FileStore to store key-value pairs. class torch.distributed.PrefixStore# A wrapper around any of the 3 key-value stores (TCPStore, FileStore, and HashStore) that adds a prefix to each key inserted to the store. Parameters prefix (str) – The prefix string that is prepended to each key before being inserted into the store. store (torch.distributed.store) – A store object that forms the underlying key-value store. __init__(self: torch._C._distributed_c10d.PrefixStore, prefix: str, store: torch._C._distributed_c10d.Store) → None# Creates a new PrefixStore. 
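PrefixStore has no example above; a minimal sketch (wrapping a HashStore, though TCPStore and FileStore work the same way; the prefix string "trainer0" is arbitrary) might look like:

```python
import torch.distributed as dist

# Wrap a HashStore so every key is namespaced under the given prefix.
base = dist.HashStore()
prefixed = dist.PrefixStore("trainer0", base)

# Keys set through the wrapper are read back through the wrapper transparently;
# the prefix handling is invisible to the caller.
prefixed.set("step", "42")
print(prefixed.get("step"))  # b'42'
```

This is useful when several components share one underlying store and must not collide on key names.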
property underlying_store# Gets the underlying store object that PrefixStore wraps around. Profiling Collective Communication# Note that you can use torch.profiler (recommended, only available after 1.8.1) or torch.autograd.profiler to profile collective communication and point-to-point communication APIs mentioned here. All out-of-the-box backends (gloo, nccl, mpi) are supported and collective communication usage will be rendered as expected in profiling output/traces. Profiling your code is the same as for any regular torch operator:

import torch
import torch.distributed as dist

with torch.profiler.profile():
    tensor = torch.randn(20, 10)
    dist.all_reduce(tensor)

Please refer to the profiler documentation for a full overview of profiler features. Multi-GPU collective functions# Warning The multi-GPU functions (which stand for multiple GPUs per CPU thread) are deprecated. As of today, PyTorch Distributed’s preferred programming model is one device per thread, as exemplified by the APIs in this document. If you are a backend developer and want to support multiple devices per thread, please contact PyTorch Distributed’s maintainers. Object collectives# Warning Object collectives have a number of serious limitations. Read further to determine if they are safe to use for your use case. Object collectives are a set of collective-like operations that work on arbitrary Python objects, as long as they can be pickled. There are various collective patterns implemented (e.g.
broadcast, all_gather, …) but they each roughly follow this pattern:

1. convert the input object into a pickle (raw bytes), then shove it into a byte tensor
2. communicate the size of this byte tensor to peers (first collective operation)
3. allocate an appropriately sized tensor to perform the real collective
4. communicate the object data (second collective operation)
5. convert the raw data back into Python (unpickle)

Object collectives sometimes have surprising performance or memory characteristics that lead to long runtimes or OOMs, and thus they should be used with caution. Here are some common issues.

Asymmetric pickle/unpickle time - Pickling objects can be slow, depending on the number, type and size of the objects. When the collective has a fan-in (e.g. gather_object), the receiving rank(s) must unpickle N times more objects than the sending rank(s) had to pickle, which can cause other ranks to time out on their next collective.

Inefficient tensor communication - Tensors should be sent via regular collective APIs, not object collective APIs. It is possible to send Tensors via object collective APIs, but they will be serialized and deserialized (including a CPU-sync and device-to-host copy in the case of non-CPU tensors), and in almost every case other than debugging or troubleshooting code, it would be worth the trouble to refactor the code to use non-object collectives instead.

Unexpected tensor devices - If you still want to send tensors via object collectives, there is another aspect specific to cuda (and possibly other accelerators) tensors. If you pickle a tensor that is currently on cuda:3, and then unpickle it, you will get another tensor on cuda:3 regardless of which process you are on, or which CUDA device is the ‘default’ device for that process. With regular tensor collective APIs, ‘output tensors’ will always be on the same, local device, which is generally what you’d expect.
Unpickling a tensor will implicitly activate a CUDA context if it is the first time a GPU is used by the process, which can waste significant amounts of GPU memory. This issue can be avoided by moving tensors to CPU before passing them as inputs to an object collective. Third-party backends# Besides the builtin GLOO/MPI/NCCL backends, PyTorch distributed supports third-party backends through a run-time registration mechanism. For references on how to develop a third-party backend through C++ Extension, please refer to Tutorials - Custom C++ and CUDA Extensions and test/cpp_extensions/cpp_c10d_extension.cpp. The capabilities of third-party backends are determined by their own implementations. The new backend derives from c10d::ProcessGroup and registers the backend name and the instantiating interface through torch.distributed.Backend.register_backend() when imported. When manually importing this backend and invoking torch.distributed.init_process_group() with the corresponding backend name, the torch.distributed package runs on the new backend. Warning Support for third-party backends is experimental and subject to change. Launch utility# The torch.distributed package also provides a launch utility in torch.distributed.launch. This helper utility can be used to launch multiple processes per node for distributed training. Module torch.distributed.launch. torch.distributed.launch is a module that spawns multiple distributed training processes on each of the training nodes. Warning This module is going to be deprecated in favor of torchrun. The utility can be used for single-node distributed training, in which one or more processes per node will be spawned. The utility can be used for either CPU training or GPU training. If the utility is used for GPU training, each distributed process will be operating on a single GPU. This can achieve well-improved single-node training performance.
It can also be used in multi-node distributed training, by spawning multiple processes on each node for well-improved multi-node distributed training performance as well. This will especially be beneficial for systems with multiple Infiniband interfaces that have direct-GPU support, since all of them can be utilized for aggregated communication bandwidth. In both cases of single-node distributed training or multi-node distributed training, this utility will launch the given number of processes per node (--nproc-per-node). If used for GPU training, this number needs to be less than or equal to the number of GPUs on the current system (nproc_per_node), and each process will be operating on a single GPU from GPU 0 to GPU (nproc_per_node - 1). How to use this module: Single-Node multi-process distributed training python -m torch.distributed.launch --nproc-per-node=NUM_GPUS_YOU_HAVE YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other arguments of your training script) Multi-Node multi-process distributed training: (e.g. two nodes) Node 1: (IP: 192.168.1.1, and has a free port: 1234) python -m torch.distributed.launch --nproc-per-node=NUM_GPUS_YOU_HAVE --nnodes=2 --node-rank=0 --master-addr="192.168.1.1" --master-port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other arguments of your training script) Node 2: python -m torch.distributed.launch --nproc-per-node=NUM_GPUS_YOU_HAVE --nnodes=2 --node-rank=1 --master-addr="192.168.1.1" --master-port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other arguments of your training script) To look up what optional arguments this module offers: python -m torch.distributed.launch --help Important Notices: 1. This utility and multi-process distributed (single-node or multi-node) GPU training currently only achieves the best performance using the NCCL distributed backend. Thus the NCCL backend is the recommended backend to use for GPU training. 2.
In your training program, you must parse the command-line argument: --local-rank=LOCAL_PROCESS_RANK, which will be provided by this module. If your training program uses GPUs, you should ensure that your code only runs on the GPU device of LOCAL_PROCESS_RANK. This can be done by: Parsing the local_rank argument >>> import argparse >>> parser = argparse.ArgumentParser() >>> parser.add_argument("--local-rank", "--local_rank", type=int) >>> args = parser.parse_args() Set your device to the local rank using either >>> torch.cuda.set_device(args.local_rank) # before your code runs or >>> with torch.cuda.device(args.local_rank): >>> # your code to run >>> ... Changed in version 2.0.0: The launcher passes the --local-rank=<rank> argument to your script. From PyTorch 2.0.0 onwards, the dashed --local-rank is preferred over the previously used underscored --local_rank. For backward compatibility, it may be necessary for users to handle both cases in their argument parsing code. This means including both "--local-rank" and "--local_rank" in the argument parser. If only "--local_rank" is provided, the launcher will trigger an error: “error: unrecognized arguments: --local-rank=<rank>”. For training code that only supports PyTorch 2.0.0+, including "--local-rank" should be sufficient. 3. In your training program, you are supposed to call the following function at the beginning to start the distributed backend. It is strongly recommended that init_method=env://. Other init methods (e.g. tcp://) may work, but env:// is the one that is officially supported by this module. >>> torch.distributed.init_process_group(backend='YOUR BACKEND', >>> init_method='env://') 4. In your training program, you can either use regular distributed functions or use the torch.nn.parallel.DistributedDataParallel() module. If your training program uses GPUs for training and you would like to use the torch.nn.parallel.DistributedDataParallel() module, here is how to configure it.
>>> model = torch.nn.parallel.DistributedDataParallel(model, >>> device_ids=[args.local_rank], >>> output_device=args.local_rank) Please ensure that the device_ids argument is set to be the only GPU device id that your code will be operating on. This is generally the local rank of the process. In other words, device_ids needs to be [args.local_rank], and output_device needs to be args.local_rank in order to use this utility. 5. Another way to pass local_rank to the subprocesses is via the environment variable LOCAL_RANK. This behavior is enabled when you launch the script with --use-env=True. You must adjust the subprocess example above to replace args.local_rank with os.environ['LOCAL_RANK']; the launcher will not pass --local-rank when you specify this flag. Warning local_rank is NOT globally unique: it is only unique per process on a machine. Thus, don’t use it to decide if you should, e.g., write to a networked filesystem. See pytorch/pytorch#12042 for an example of how things can go wrong if you don’t do this correctly. Spawn utility# The Multiprocessing package - torch.multiprocessing package also provides a spawn function in torch.multiprocessing.spawn(). This helper function can be used to spawn multiple processes. It works by passing in the function that you want to run and spawns N processes to run it. This can be used for multiprocess distributed training as well. For references on how to use it, please refer to PyTorch example - ImageNet implementation Note that this function requires Python 3.4 or higher. Debugging torch.distributed applications# Debugging distributed applications can be challenging due to hard-to-understand hangs, crashes, or inconsistent behavior across ranks. torch.distributed provides a suite of tools to help debug training applications in a self-serve fashion: Python Breakpoint# It is extremely convenient to use python’s debugger in a distributed environment, but because it does not work out of the box many people do not use it at all.
PyTorch offers a customized wrapper around pdb that streamlines the process. torch.distributed.breakpoint makes this process easy. Internally, it customizes pdb’s breakpoint behavior in a few ways but otherwise behaves as normal pdb. Attaches the debugger only on one rank (specified by the user). Ensures all other ranks stop, by using a torch.distributed.barrier() that will release once the debugged rank issues a continue. Reroutes stdin from the child process such that it connects to your terminal. To use it, simply issue torch.distributed.breakpoint(rank) on all ranks, using the same value for rank in each case. Monitored Barrier# As of v1.10, torch.distributed.monitored_barrier() exists as an alternative to torch.distributed.barrier() which fails with helpful information about which rank may be faulty when crashing, i.e. when not all ranks call into torch.distributed.monitored_barrier() within the provided timeout. torch.distributed.monitored_barrier() implements a host-side barrier using send/recv communication primitives in a manner similar to acknowledgements, allowing rank 0 to report which rank(s) failed to acknowledge the barrier in time. As an example, consider the following function where rank 1 fails to call into torch.distributed.monitored_barrier() (in practice this could be due to an application bug or hang in a previous collective): import os from datetime import timedelta import torch import torch.distributed as dist import torch.multiprocessing as mp def worker(rank): dist.init_process_group("nccl", rank=rank, world_size=2) # monitored barrier requires gloo process group to perform host-side sync.
group_gloo = dist.new_group(backend="gloo") if rank not in [1]: dist.monitored_barrier(group=group_gloo, timeout=timedelta(seconds=2)) if __name__ == "__main__": os.environ["MASTER_ADDR"] = "localhost" os.environ["MASTER_PORT"] = "29501" mp.spawn(worker, nprocs=2, args=()) The following error message is produced on rank 0, allowing the user to determine which rank(s) may be faulty and investigate further: RuntimeError: Rank 1 failed to pass monitoredBarrier in 2000 ms Original exception: [gloo/transport/tcp/pair.cc:598] Connection closed by peer [2401:db00:eef0:1100:3560:0:1c05:25d]:8594 TORCH_DISTRIBUTED_DEBUG# With TORCH_CPP_LOG_LEVEL=INFO, the environment variable TORCH_DISTRIBUTED_DEBUG can be used to trigger additional useful logging and collective synchronization checks to ensure all ranks are synchronized appropriately. TORCH_DISTRIBUTED_DEBUG can be set to either OFF (default), INFO, or DETAIL depending on the debugging level required. Please note that the most verbose option, DETAIL, may impact the application performance and thus should only be used when debugging issues. Setting TORCH_DISTRIBUTED_DEBUG=INFO will result in additional debug logging when models trained with torch.nn.parallel.DistributedDataParallel() are initialized, and TORCH_DISTRIBUTED_DEBUG=DETAIL will additionally log runtime performance statistics for a select number of iterations. These runtime statistics include data such as forward time, backward time, gradient communication time, etc.
As an example, given the following application: import os import torch import torch.distributed as dist import torch.multiprocessing as mp class TwoLinLayerNet(torch.nn.Module): def __init__(self): super().__init__() self.a = torch.nn.Linear(10, 10, bias=False) self.b = torch.nn.Linear(10, 1, bias=False) def forward(self, x): a = self.a(x) b = self.b(x) return (a, b) def worker(rank): dist.init_process_group("nccl", rank=rank, world_size=2) torch.cuda.set_device(rank) print("init model") model = TwoLinLayerNet().cuda() print("init ddp") ddp_model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[rank]) inp = torch.randn(10, 10).cuda() print("train") for _ in range(20): output = ddp_model(inp) loss = output[0] + output[1] loss.sum().backward() if __name__ == "__main__": os.environ["MASTER_ADDR"] = "localhost" os.environ["MASTER_PORT"] = "29501" os.environ["TORCH_CPP_LOG_LEVEL"]="INFO" os.environ[ "TORCH_DISTRIBUTED_DEBUG" ] = "DETAIL" # set to DETAIL for runtime logging. mp.spawn(worker, nprocs=2, args=()) The following logs are rendered at initialization time: I0607 16:10:35.739390 515217 logger.cpp:173] [Rank 0]: DDP Initialized with: broadcast_buffers: 1 bucket_cap_bytes: 26214400 find_unused_parameters: 0 gradient_as_bucket_view: 0 is_multi_device_module: 0 iteration: 0 num_parameter_tensors: 2 output_device: 0 rank: 0 total_parameter_size_bytes: 440 world_size: 2 backend_name: nccl bucket_sizes: 440 cuda_visible_devices: N/A device_ids: 0 dtypes: float master_addr: localhost master_port: 29501 module_name: TwoLinLayerNet nccl_async_error_handling: N/A nccl_blocking_wait: N/A nccl_debug: WARN nccl_ib_timeout: N/A nccl_nthreads: N/A nccl_socket_ifname: N/A torch_distributed_debug: INFO The following logs are rendered during runtime (when TORCH_DISTRIBUTED_DEBUG=DETAIL is set): I0607 16:18:58.085681 544067 logger.cpp:344] [Rank 1 / 2] Training TwoLinLayerNet unused_parameter_size=0 Avg forward compute time: 40838608 Avg backward compute time: 5983335 
Avg backward comm. time: 4326421 Avg backward comm/comp overlap time: 4207652 I0607 16:18:58.085693 544066 logger.cpp:344] [Rank 0 / 2] Training TwoLinLayerNet unused_parameter_size=0 Avg forward compute time: 42850427 Avg backward compute time: 3885553 Avg backward comm. time: 2357981 Avg backward comm/comp overlap time: 2234674 In addition, TORCH_DISTRIBUTED_DEBUG=INFO enhances crash logging in torch.nn.parallel.DistributedDataParallel() due to unused parameters in the model. Currently, find_unused_parameters=True must be passed into torch.nn.parallel.DistributedDataParallel() initialization if there are parameters that may be unused in the forward pass, and as of v1.10, all model outputs are required to be used in loss computation as torch.nn.parallel.DistributedDataParallel() does not support unused parameters in the backwards pass. These constraints are challenging especially for larger models, thus when crashing with an error, torch.nn.parallel.DistributedDataParallel() will log the fully qualified name of all parameters that went unused. For example, in the above application, if we modify loss to be instead computed as loss = output[1], then TwoLinLayerNet.a does not receive a gradient in the backwards pass, and thus results in DDP failing. On a crash, the user is passed information about parameters which went unused, which may be challenging to manually find for large models: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss. 
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). Parameters which did not receive grad for rank 0: a.weight Parameter indices which did not receive grad for rank 0: 0 Setting TORCH_DISTRIBUTED_DEBUG=DETAIL will trigger additional consistency and synchronization checks on every collective call issued by the user either directly or indirectly (such as DDP allreduce). This is done by creating a wrapper process group that wraps all process groups returned by the torch.distributed.init_process_group() and torch.distributed.new_group() APIs. As a result, these APIs will return a wrapper process group that can be used exactly like a regular process group, but performs consistency checks before dispatching the collective to an underlying process group. Currently, these checks include a torch.distributed.monitored_barrier(), which ensures all ranks complete their outstanding collective calls and reports ranks which are stuck. Next, the collective itself is checked for consistency by ensuring all collective functions match and are called with consistent tensor shapes. If this is not the case, a detailed error report is included when the application crashes, rather than a hang or uninformative error message.
As an example, consider the following function which has mismatched input shapes into torch.distributed.all_reduce(): import os import torch import torch.distributed as dist import torch.multiprocessing as mp def worker(rank): dist.init_process_group("nccl", rank=rank, world_size=2) torch.cuda.set_device(rank) tensor = torch.randn(10 if rank == 0 else 20).cuda() dist.all_reduce(tensor) torch.cuda.synchronize(device=rank) if __name__ == "__main__": os.environ["MASTER_ADDR"] = "localhost" os.environ["MASTER_PORT"] = "29501" os.environ["TORCH_CPP_LOG_LEVEL"] = "INFO" os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL" mp.spawn(worker, nprocs=2, args=()) With the NCCL backend, such an application would likely result in a hang which can be challenging to root-cause in nontrivial scenarios. If the user enables TORCH_DISTRIBUTED_DEBUG=DETAIL and reruns the application, the following error message reveals the root cause: work = default_pg.allreduce([tensor], opts) RuntimeError: Error when verifying shape tensors for collective ALLREDUCE on rank 0. This likely indicates that input shapes into the collective are mismatched across ranks. Got shapes: 10 [ torch.LongTensor{1} ] Note For fine-grained control of the debug level during runtime the functions torch.distributed.set_debug_level(), torch.distributed.set_debug_level_from_env(), and torch.distributed.get_debug_level() can also be used. In addition, TORCH_DISTRIBUTED_DEBUG=DETAIL can be used in conjunction with TORCH_SHOW_CPP_STACKTRACES=1 to log the entire callstack when a collective desynchronization is detected. These collective desynchronization checks will work for all applications that use c10d collective calls backed by process groups created with the torch.distributed.init_process_group() and torch.distributed.new_group() APIs.
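The runtime debug-level functions mentioned in the Note can be exercised as follows; this sketch assumes torch.distributed exposes the DebugLevel enum alongside set_debug_level()/get_debug_level(), which may vary across versions:

```python
import torch.distributed as dist

# Hedged sketch: raise the c10d debug level at runtime instead of via the
# TORCH_DISTRIBUTED_DEBUG environment variable, then read it back.
dist.set_debug_level(dist.DebugLevel.DETAIL)
print(dist.get_debug_level())
```

This is handy when only one phase of a job needs the (expensive) DETAIL checks: raise the level around the suspect region and lower it afterwards.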
Logging# In addition to explicit debugging support via torch.distributed.monitored_barrier() and TORCH_DISTRIBUTED_DEBUG, the underlying C++ library of torch.distributed also outputs log messages at various levels. These messages can be helpful to understand the execution state of a distributed training job and to troubleshoot problems such as network connection failures. The following matrix shows how the log level can be adjusted via the combination of the TORCH_CPP_LOG_LEVEL and TORCH_DISTRIBUTED_DEBUG environment variables.

| TORCH_CPP_LOG_LEVEL | TORCH_DISTRIBUTED_DEBUG | Effective Log Level |
| --- | --- | --- |
| ERROR | ignored | Error |
| WARNING | ignored | Warning |
| INFO | ignored | Info |
| INFO | INFO | Debug |
| INFO | DETAIL | Trace (a.k.a. All) |

Distributed components raise custom Exception types derived from RuntimeError: torch.distributed.DistError: This is the base type of all distributed exceptions. torch.distributed.DistBackendError: This exception is thrown when a backend-specific error occurs. For example, if the NCCL backend is used and the user attempts to use a GPU that is not available to the NCCL library. torch.distributed.DistNetworkError: This exception is thrown when networking libraries encounter errors (ex: Connection reset by peer) torch.distributed.DistStoreError: This exception is thrown when the Store encounters an error (ex: TCPStore timeout) class torch.distributed.DistError# Exception raised when an error occurs in the distributed library class torch.distributed.DistBackendError# Exception raised when a backend error occurs in distributed class torch.distributed.DistNetworkError# Exception raised when a network error occurs in distributed class torch.distributed.DistStoreError# Exception raised when an error occurs in the distributed store If you are running single node training, it may be convenient to interactively breakpoint your script.
We offer a way to conveniently breakpoint a single rank: torch.distributed.breakpoint(rank=0, skip=0, timeout_s=3600)[source]# Set a breakpoint, but only on a single rank. All other ranks will wait for you to be done with the breakpoint before continuing. Parameters rank (int) – Which rank to break on. Default: 0 skip (int) – Skip the first skip calls to this breakpoint. Default: 0.
**Pattern 3:** Initialization# The package needs to be initialized using the torch.distributed.init_process_group() or torch.distributed.device_mesh.init_device_mesh() function before calling any other methods. Both block until all processes have joined. Warning Initialization is not thread-safe. Process group creation should be performed from a single thread, to prevent inconsistent ‘UUID’ assignment across ranks, and to prevent races during initialization that can lead to hangs. torch.distributed.is_available()[source]# Return True if the distributed package is available. Otherwise, torch.distributed does not expose any other APIs. Currently, torch.distributed is available on Linux, MacOS and Windows. Set USE_DISTRIBUTED=1 to enable it when building PyTorch from source. Currently, the default value is USE_DISTRIBUTED=1 for Linux and Windows, USE_DISTRIBUTED=0 for MacOS. Return type bool torch.distributed.init_process_group(backend=None, init_method=None, timeout=None, world_size=-1, rank=-1, store=None, group_name='', pg_options=None, device_id=None)[source]# Initialize the default distributed process group. This will also initialize the distributed package. There are 2 main ways to initialize a process group: Specify store, rank, and world_size explicitly. Specify init_method (a URL string) which indicates where/how to discover peers. Optionally specify rank and world_size, or encode all required parameters in the URL and omit them. If neither is specified, init_method is assumed to be “env://”. Parameters backend (str or Backend, optional) – The backend to use. Depending on build-time configurations, valid values include mpi, gloo, nccl, ucc, xccl or one that is registered by a third-party plugin. Since 2.6, if backend is not provided, c10d will use a backend registered for the device type indicated by the device_id kwarg (if provided). The known default registrations today are: nccl for cuda, gloo for cpu, xccl for xpu.
If neither backend nor device_id is provided, c10d will detect the accelerator on the run-time machine and use a backend registered for that detected accelerator (or cpu). This field can be given as a lowercase string (e.g., "gloo"), which can also be accessed via Backend attributes (e.g., Backend.GLOO). If using multiple processes per machine with nccl backend, each process must have exclusive access to every GPU it uses, as sharing GPUs between processes can result in deadlock or NCCL invalid usage. ucc backend is experimental. Default backend for the device can be queried with get_default_backend_for_device(). init_method (str, optional) – URL specifying how to initialize the process group. Default is “env://” if no init_method or store is specified. Mutually exclusive with store. world_size (int, optional) – Number of processes participating in the job. Required if store is specified. rank (int, optional) – Rank of the current process (it should be a number between 0 and world_size-1). Required if store is specified. store (Store, optional) – Key/value store accessible to all workers, used to exchange connection/address information. Mutually exclusive with init_method. timeout (timedelta, optional) – Timeout for operations executed against the process group. Default value is 10 minutes for NCCL and 30 minutes for other backends. This is the duration after which collectives will be aborted asynchronously and the process will crash. This is done since CUDA execution is async and it is no longer safe to continue executing user code since failed async NCCL operations might result in subsequent CUDA operations running on corrupted data. When TORCH_NCCL_BLOCKING_WAIT is set, the process will block and wait for this timeout. group_name (str, optional, deprecated) – Group name. 
This argument is ignored. pg_options (ProcessGroupOptions, optional) – process group options specifying what additional options need to be passed in during the construction of specific process groups. As of now, the only option we support is ProcessGroupNCCL.Options for the nccl backend; is_high_priority_stream can be specified so that the nccl backend can pick up high priority cuda streams when there are compute kernels waiting. For other available options to configure nccl, see https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/api/types.html#ncclconfig-t device_id (torch.device | int, optional) – a single, specific device this process will work on, allowing for backend-specific optimizations. Currently this has two effects, only under NCCL: the communicator is immediately formed (calling ncclCommInit* immediately rather than the normal lazy call) and sub-groups will use ncclCommSplit when possible to avoid unnecessary overhead of group creation. If you want to detect NCCL initialization errors early, you can also use this field. If an int is provided, the API assumes that the accelerator type at compile time will be used. Note To enable backend == Backend.MPI, PyTorch needs to be built from source on a system that supports MPI. Note Support for multiple backends is experimental. Currently when no backend is specified, both gloo and nccl backends will be created. The gloo backend will be used for collectives with CPU tensors and the nccl backend will be used for collectives with CUDA tensors. A custom backend can be specified by passing in a string with format “<device_type>:<backend_name>,<device_type>:<backend_name>”, e.g. “cpu:gloo,cuda:custom_backend”. torch.distributed.device_mesh.init_device_mesh(device_type, mesh_shape, *, mesh_dim_names=None, backend_override=None)[source]# Initializes a DeviceMesh based on the device_type, mesh_shape, and mesh_dim_names parameters.
This creates a DeviceMesh with an n-dimensional array layout, where n is the length of mesh_shape. If mesh_dim_names is provided, each dimension is labeled as mesh_dim_names[i]. Note init_device_mesh follows the SPMD programming model, meaning the same PyTorch Python program runs on all processes/ranks in the cluster. Ensure mesh_shape (the dimensions of the nD array describing the device layout) is identical across all ranks. An inconsistent mesh_shape may lead to hanging. Note If no process group is found, init_device_mesh will initialize the distributed process group/groups required for distributed communications behind the scenes. Parameters device_type (str) – The device type of the mesh. Currently supports: “cpu”, “cuda/cuda-like”, “xpu”. Passing in a device type with a GPU index, such as “cuda:0”, is not allowed. mesh_shape (Tuple[int]) – A tuple defining the dimensions of the multi-dimensional array describing the layout of devices. mesh_dim_names (Tuple[str], optional) – A tuple of mesh dimension names to assign to each dimension of the multi-dimensional array describing the layout of devices. Its length must match the length of mesh_shape. Each string in mesh_dim_names must be unique. backend_override (Dict[int | str, tuple[str, Options] | str | Options], optional) – Overrides for some or all of the ProcessGroups that will be created for each mesh dimension. Each key can be either the index of a dimension or its name (if mesh_dim_names is provided). Each value can be a tuple containing the name of the backend and its options, or just one of these two components (in which case the other will be set to its default value). Returns A DeviceMesh object representing the device layout.
Return type: DeviceMesh

Example:

```
>>> from torch.distributed.device_mesh import init_device_mesh
>>>
>>> mesh_1d = init_device_mesh("cuda", mesh_shape=(8,))
>>> mesh_2d = init_device_mesh("cuda", mesh_shape=(2, 8), mesh_dim_names=("dp", "tp"))
```

torch.distributed.is_initialized() – Check if the default process group has been initialized. Return type: bool

torch.distributed.is_mpi_available() – Check if the MPI backend is available. Return type: bool

torch.distributed.is_nccl_available() – Check if the NCCL backend is available. Return type: bool

torch.distributed.is_gloo_available() – Check if the Gloo backend is available. Return type: bool

torch.distributed.distributed_c10d.is_xccl_available() – Check if the XCCL backend is available. Return type: bool

torch.distributed.is_torchelastic_launched() – Check whether this process was launched with torch.distributed.elastic (aka torchelastic). The existence of the TORCHELASTIC_RUN_ID environment variable is used as a proxy to determine whether the current process was launched with torchelastic. This is a reasonable proxy since TORCHELASTIC_RUN_ID maps to the rendezvous id, which is always a non-null value indicating the job id for peer-discovery purposes. Return type: bool

torch.distributed.get_default_backend_for_device(device) – Return the default backend for the given device as a lowercase string. Parameters: device (Union[str, torch.device]) – the device to get the default backend for. Return type: str

Currently three initialization methods are supported:

**TCP initialization**

There are two ways to initialize using TCP, both requiring a network address reachable from all processes and a desired `world_size`. The first way requires specifying an address that belongs to the rank 0 process. This initialization method requires that all processes have manually specified ranks.
Note that the multicast address is not supported anymore in the latest distributed package, and `group_name` is deprecated as well.

```
import torch.distributed as dist

# Use address of one of the machines
dist.init_process_group(backend, init_method='tcp://10.1.1.20:23456',
                        rank=args.rank, world_size=4)
```

**Shared file-system initialization**

Another initialization method makes use of a file system that is shared and visible from all machines in a group, along with a desired `world_size`. The URL should start with `file://` and contain a path to a non-existent file (in an existing directory) on a shared file system. File-system initialization will automatically create that file if it doesn't exist, but will not delete the file. Therefore, it is your responsibility to make sure that the file is cleaned up before the next `init_process_group()` call on the same file path/name. Note that automatic rank assignment is not supported anymore in the latest distributed package, and `group_name` is deprecated as well.

**Warning:** This method assumes that the file system supports locking using `fcntl` – most local systems and NFS support it.

**Warning:** This method will always create the file and try its best to clean it up and remove it at the end of the program. In other words, each initialization with the file init method needs a brand-new empty file in order to succeed. If a file left over from a previous initialization (which happened not to get cleaned up) is used again, this is unexpected behavior and can often cause deadlocks and failures. Therefore, even though this method will try its best to clean up the file, if the auto-delete is unsuccessful it is your responsibility to ensure that the file is removed at the end of training, to prevent the same file from being reused the next time. This is especially important if you plan to call `init_process_group()` multiple times on the same file name.
In other words, if the file is not removed/cleaned up and you call `init_process_group()` again on that file, failures are expected. The rule of thumb is to make sure the file is non-existent or empty every time `init_process_group()` is called.

```
import torch.distributed as dist

# rank should always be specified
dist.init_process_group(backend, init_method='file:///mnt/nfs/sharedfile',
                        world_size=4, rank=args.rank)
```

**Environment variable initialization**

This method will read the configuration from environment variables, allowing one to fully customize how the information is obtained. The variables to be set are:

- MASTER_PORT – required; has to be a free port on the machine with rank 0
- MASTER_ADDR – required (except for rank 0); address of the rank 0 node
- WORLD_SIZE – required; can be set either here, or in a call to the init function
- RANK – required; can be set either here, or in a call to the init function

The machine with rank 0 will be used to set up all connections. This is the default method, meaning that `init_method` does not have to be specified (or can be `env://`).

**Improving initialization time**

TORCH_GLOO_LAZY_INIT – establishes connections on demand rather than using a full mesh, which can greatly improve initialization time for non-all2all operations.
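As a minimal single-process sketch of the environment-variable method (the port number here is arbitrary; in a real job a launcher such as torchrun exports these variables for every rank):

```python
# Single-process sketch: set the four env:// variables by hand, then
# initialize the default process group with the gloo backend.
import os
import torch.distributed as dist

os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29517"   # any free port on the rank 0 machine
os.environ["WORLD_SIZE"] = "1"
os.environ["RANK"] = "0"

# init_method defaults to env:// and reads the variables set above
dist.init_process_group(backend="gloo")

rank = dist.get_rank()            # -> 0 in this one-process sketch
world_size = dist.get_world_size()

dist.destroy_process_group()
```

With more than one process, each rank would set its own RANK and share the same MASTER_ADDR/MASTER_PORT/WORLD_SIZE.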
-
- ```
- torch.distributed.init_process_group()
- ```
-
- **Pattern 4:** Example:
-
- ```
- >>> from torch.distributed.device_mesh import init_device_mesh
- >>>
- >>> mesh_1d = init_device_mesh("cuda", mesh_shape=(8,))
- >>> mesh_2d = init_device_mesh("cuda", mesh_shape=(2, 8), mesh_dim_names=("dp", "tp"))
- ```
-
- **Pattern 5:** Groups: By default, collectives operate on the default group (also called the world) and require all processes to enter the distributed function call. However, some workloads can benefit from more fine-grained communication. This is where distributed groups come into play. The new_group() function can be used to create new groups with arbitrary subsets of all processes. It returns an opaque group handle that can be given as the group argument to all collectives (collectives are distributed functions to exchange information in certain well-known programming patterns). torch.distributed.new_group(ranks=None, timeout=None, backend=None, pg_options=None, use_local_synchronization=False, group_desc=None, device_id=None) – Create a new distributed group. This function requires that all processes in the main group (i.e. all processes that are part of the distributed job) enter this function, even if they are not going to be members of the group. Additionally, groups should be created in the same order in all processes. Warning – Safe concurrent usage: When using multiple process groups with the NCCL backend, the user must ensure a globally consistent execution order of collectives across ranks. If multiple threads within a process issue collectives, explicit synchronization is necessary to ensure consistent ordering. When using async variants of torch.distributed communication APIs, a work object is returned and the communication kernel is enqueued on a separate CUDA stream, allowing overlap of communication and computation. Once one or more async ops have been issued on one process group, they must be synchronized with other CUDA streams by calling work.wait() before using another process group. See Using multiple NCCL communicators concurrently <https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/communicators.html#using-multiple-nccl-communicators-concurrently> for more details. Parameters: ranks (list[int]) – List of ranks of group members.
If None, will be set to all ranks. Default is None. timeout (timedelta, optional) – see init_process_group for details and the default value. backend (str or Backend, optional) – The backend to use. Depending on build-time configurations, valid values are gloo and nccl. By default, uses the same backend as the global group. This field should be given as a lowercase string (e.g., "gloo"), which can also be accessed via Backend attributes (e.g., Backend.GLOO). If None is passed in, the backend corresponding to the default process group will be used. Default is None. pg_options (ProcessGroupOptions, optional) – process group options specifying what additional options need to be passed in during the construction of specific process groups; e.g., for the nccl backend, is_high_priority_stream can be specified so that the process group can pick up high-priority CUDA streams. For other available NCCL configuration options, see https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/api/types.html#ncclconfig-t use_local_synchronization (bool, optional) – perform a group-local barrier at the end of process group creation. This differs in that non-member ranks don't need to call into the API and don't join the barrier. group_desc (str, optional) – a string describing the process group. device_id (torch.device, optional) – a single, specific device to "bind" this process to. The new_group call will try to initialize a communication backend immediately for the device if this field is given. Returns: A handle of the distributed group that can be given to collective calls, or GroupMember.NON_GROUP_MEMBER if the rank is not part of ranks.

N.B. use_local_synchronization doesn't work with MPI.

N.B. While use_local_synchronization=True can be significantly faster with larger clusters and small process groups, care must be taken since it changes cluster behavior: non-member ranks don't join the group barrier().

N.B. use_local_synchronization=True can lead to deadlocks when each rank creates multiple overlapping process groups. To avoid that, make sure all ranks follow the same global creation order.

torch.distributed.get_group_rank(group, global_rank) – Translate a global rank into a group rank. global_rank must be part of group, otherwise this raises RuntimeError. Parameters: group (ProcessGroup) – the ProcessGroup to find the relative rank in. global_rank (int) – the global rank to query. Returns: Group rank of global_rank relative to group. Return type: int. N.B. calling this function on the default process group returns identity.

torch.distributed.get_global_rank(group, group_rank) – Translate a group rank into a global rank. group_rank must be part of group, otherwise this raises RuntimeError. Parameters: group (ProcessGroup) – the ProcessGroup to find the global rank from. group_rank (int) – the group rank to query. Returns: Global rank of group_rank relative to group. Return type: int. N.B. calling this function on the default process group returns identity.

torch.distributed.get_process_group_ranks(group) – Get all ranks associated with group. Parameters: group (Optional[ProcessGroup]) – the ProcessGroup to get all ranks from; if None, the default process group is used. Returns: List of global ranks ordered by group rank. Return type: list[int]
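A hedged single-process sketch (gloo, world_size=1) of subgroup creation and the rank-translation helpers above; a real subgroup would contain a strict subset of the ranks:

```python
# Single-process sketch: create a subgroup containing rank 0 and exercise
# the rank-translation helpers. Env vars stand in for a real launcher.
import os
import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29519")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

group = dist.new_group(ranks=[0])             # every rank must call this

group_rank = dist.get_group_rank(group, 0)    # global rank 0 -> group rank
global_rank = dist.get_global_rank(group, 0)  # group rank 0 -> global rank
members = dist.get_process_group_ranks(group) # global ranks, group order

dist.destroy_process_group()
```

With only one rank, both translations are the identity and `members` is `[0]`; in a multi-rank job, group ranks are renumbered 0..len(ranks)-1 in the order given to `new_group`.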
-
- ```
- new_group()
- ```
-
- **Pattern 6:** Warning – Safe concurrent usage: When using multiple process groups with the NCCL backend, the user must ensure a globally consistent execution order of collectives across ranks. If multiple threads within a process issue collectives, explicit synchronization is necessary to ensure consistent ordering. When using async variants of torch.distributed communication APIs, a work object is returned and the communication kernel is enqueued on a separate CUDA stream, allowing overlap of communication and computation. Once one or more async ops have been issued on one process group, they must be synchronized with other CUDA streams by calling work.wait() before using another process group. See Using multiple NCCL communicators concurrently <https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/communicators.html#using-multiple-nccl-communicators-concurrently> for more details.
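The async pattern this warning describes can be sketched in a single process (gloo backend, so no CUDA streams are involved; the point is only the Work-handle protocol, not NCCL ordering):

```python
# Sketch: an async collective returns a Work handle that must be wait()ed
# before its result (or another process group) is used.
import os
import torch
import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29520")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

t = torch.ones(4)
work = dist.all_reduce(t, async_op=True)  # enqueued, not yet complete
work.wait()                               # synchronize before reading t

result = t.sum().item()  # all_reduce over a world of 1 leaves t unchanged

dist.destroy_process_group()
```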
-
- ```
- NCCL
- ```
-
- **Pattern 7:** Note – If you are using DistributedDataParallel in conjunction with the Distributed RPC Framework, you should always use torch.distributed.autograd.backward() to compute gradients and torch.distributed.optim.DistributedOptimizer for optimizing parameters. Example: >>> import torch.distributed.autograd as dist_autograd >>> from torch.nn.parallel import DistributedDataParallel as DDP >>> import torch >>> from torch import optim >>> from torch.distributed.optim import DistributedOptimizer >>> import torch.distributed.rpc as rpc >>> from torch.distributed.rpc import RRef >>> >>> t1 = torch.rand((3, 3), requires_grad=True) >>> t2 = torch.rand((3, 3), requires_grad=True) >>> rref = rpc.remote("worker1", torch.add, args=(t1, t2)) >>> ddp_model = DDP(my_model) >>> >>> # Setup optimizer >>> optimizer_params = [rref] >>> for param in ddp_model.parameters(): >>> optimizer_params.append(RRef(param)) >>> >>> dist_optim = DistributedOptimizer( >>> optim.SGD, >>> optimizer_params, >>> lr=0.05, >>> ) >>> >>> with dist_autograd.context() as context_id: >>> pred = ddp_model(rref.to_here()) >>> loss = loss_func(pred, target) >>> dist_autograd.backward(context_id, [loss]) >>> dist_optim.step(context_id)
-
- ```
- torch.distributed.autograd.backward()
- ```
-
- **Pattern 8:** static_graph (bool) – When set to True, DDP knows the trained graph is static. Static graph means: 1) The set of used and unused parameters will not change during the whole training loop; in this case, it does not matter whether users set find_unused_parameters = True or not. 2) How the graph is trained will not change during the whole training loop (meaning there is no control flow depending on iterations). When static_graph is set to True, DDP will support cases that could not be supported in the past: 1) reentrant backwards; 2) activation checkpointing multiple times; 3) activation checkpointing when the model has unused parameters; 4) model parameters that are outside of the forward function; 5) potentially improved performance when there are unused parameters, as DDP will not search the graph in each iteration to detect unused parameters when static_graph is True. To check whether you can set static_graph to True, one way is to check the DDP logging data at the end of your previous model training: if ddp_logging_data.get("can_set_static_graph") == True, you can most likely set static_graph = True as well. Example: >>> model_DDP = torch.nn.parallel.DistributedDataParallel(model) >>> # Training loop >>> ... >>> ddp_logging_data = model_DDP._get_ddp_logging_data() >>> static_graph = ddp_logging_data.get("can_set_static_graph")
-
- ```
- True
- ```
-
- ## Reference Files
-
- This skill includes comprehensive documentation in `references/`:
-
- - **other.md** - Other documentation
-
- Use `view` to read specific reference files when detailed information is needed.
-
- ## Working with This Skill
-
- ### For Beginners
- Start with the getting_started or tutorials reference files for foundational concepts.
-
- ### For Specific Features
- Use the appropriate category reference file (api, guides, etc.) for detailed information.
-
- ### For Code Examples
- The quick reference section above contains common patterns extracted from the official docs.
-
- ## Resources
-
- ### references/
- Organized documentation extracted from official sources. These files contain:
- - Detailed explanations
- - Code examples with language annotations
- - Links to original documentation
- - Table of contents for quick navigation
-
- ### scripts/
- Add helper scripts here for common automation tasks.
-
- ### assets/
- Add templates, boilerplate, or example projects here.
-
- ## Notes
-
- - This skill was automatically generated from official documentation
- - Reference files preserve the structure and examples from source docs
- - Code examples include language detection for better syntax highlighting
- - Quick reference patterns are extracted from common usage examples in the docs
-
- ## Updating
-
- To refresh this skill with updated documentation:
- 1. Re-run the scraper with the same configuration
- 2. The skill will be rebuilt with the latest information
-
@@ -1,7 +0,0 @@
- # Pytorch-Fsdp Documentation Index
-
- ## Categories
-
- ### Other
- **File:** `other.md`
- **Pages:** 15