survival 1.1.36__cp314-cp314-macosx_10_12_x86_64.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,678 @@
1
+ Metadata-Version: 2.4
2
+ Name: survival
3
+ Version: 1.1.36
4
+ Classifier: Development Status :: 5 - Production/Stable
5
+ Classifier: Intended Audience :: Science/Research
6
+ Classifier: Intended Audience :: Healthcare Industry
7
+ Classifier: License :: OSI Approved :: MIT License
8
+ Classifier: Operating System :: POSIX :: Linux
9
+ Classifier: Operating System :: MacOS
10
+ Classifier: Operating System :: Microsoft :: Windows
11
+ Classifier: Programming Language :: Python :: 3
12
+ Classifier: Programming Language :: Rust
13
+ Classifier: Topic :: Scientific/Engineering :: Medical Science Apps.
14
+ Classifier: Topic :: Scientific/Engineering :: Mathematics
15
+ Classifier: Typing :: Typed
16
+ Requires-Dist: numpy>=1.20.0
17
+ Requires-Dist: pre-commit==4.5.1 ; extra == 'dev'
18
+ Requires-Dist: pytest==9.0.2 ; extra == 'dev'
19
+ Requires-Dist: numpy==2.4.1 ; extra == 'dev'
20
+ Requires-Dist: scikit-learn>=1.0.0 ; extra == 'sklearn'
21
+ Requires-Dist: pytest==9.0.2 ; extra == 'test'
22
+ Requires-Dist: numpy==2.4.1 ; extra == 'test'
23
+ Requires-Dist: pandas==2.3.3 ; extra == 'test'
24
+ Requires-Dist: polars==1.37.1 ; extra == 'test'
25
+ Provides-Extra: dev
26
+ Provides-Extra: sklearn
27
+ Provides-Extra: test
28
+ License-File: LICENSE
29
+ Summary: A high-performance survival analysis library written in Rust with Python bindings
30
+ Keywords: survival-analysis,kaplan-meier,cox-regression,statistics,biostatistics,rust
31
+ Author-email: Cameron Lyons <cameron.lyons2@gmail.com>
32
+ Maintainer-email: Cameron Lyons <cameron.lyons2@gmail.com>
33
+ License: MIT
34
+ Requires-Python: >=3.11
35
+ Description-Content-Type: text/markdown; charset=UTF-8; variant=GFM
36
+ Project-URL: Documentation, https://github.com/Cameron-Lyons/survival#readme
37
+ Project-URL: Issues, https://github.com/Cameron-Lyons/survival/issues
38
+ Project-URL: Repository, https://github.com/Cameron-Lyons/survival
39
+
40
+ # survival
41
+
42
+ [![Crates.io](https://img.shields.io/crates/v/survival.svg)](https://crates.io/crates/survival)
43
+ [![PyPI version](https://img.shields.io/pypi/v/survival.svg)](https://pypi.org/project/survival/)
44
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
45
+
46
+ A high-performance survival analysis library written in Rust, with a Python API powered by [PyO3](https://github.com/PyO3/pyo3) and [maturin](https://github.com/PyO3/maturin).
47
+
48
+ ## Features
49
+
50
+ - Core survival analysis routines
51
+ - Cox proportional hazards models with frailty
52
+ - Kaplan-Meier and Aalen-Johansen (multi-state) survival curves
53
+ - Nelson-Aalen estimator
54
+ - Parametric accelerated failure time models
55
+ - Fine-Gray competing risks model
56
+ - Penalized splines (P-splines) for smooth covariate effects
57
+ - Concordance index calculations
58
+ - Person-years calculations
59
+ - Score calculations for survival models
60
+ - Residual analysis (martingale, Schoenfeld, score residuals)
61
+ - Bootstrap confidence intervals
62
+ - Cross-validation for model assessment
63
+ - Statistical tests (log-rank, likelihood ratio, Wald, score, proportional hazards)
64
+ - Sample size and power calculations
65
+ - RMST (Restricted Mean Survival Time) analysis
66
+ - Landmark analysis
67
+ - Calibration and risk stratification
68
+ - Time-dependent AUC
69
+ - Conditional logistic regression
70
+ - Time-splitting utilities
71
+
72
+ ## Installation
73
+
74
+ ### From PyPI (Recommended)
75
+
76
+ ```sh
77
+ pip install survival
78
+ ```
79
+
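+ The metadata also declares optional extras (`sklearn`, `test`, `dev`). Assuming the standard pip extras syntax, they can be requested at install time, for example:
+ 
+ ```sh
+ # Pull in the optional scikit-learn integration (scikit-learn>=1.0.0)
+ pip install "survival[sklearn]"
+ ```
+ 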
80
+ ### From Source
81
+
82
+ #### Prerequisites
83
+
84
+ - Python 3.11+ (the wheel metadata declares `Requires-Python: >=3.11`)
85
+ - Rust (see [rustup.rs](https://rustup.rs/))
86
+ - [maturin](https://github.com/PyO3/maturin)
87
+
88
+ Install maturin:
89
+ ```sh
90
+ pip install maturin
91
+ ```
92
+
93
+ #### Build and Install
94
+
95
+ Build the Python wheel:
96
+ ```sh
97
+ maturin build --release
98
+ ```
99
+
100
+ Install the wheel:
101
+ ```sh
102
+ pip install target/wheels/survival-*.whl
103
+ ```
104
+
105
+ For development:
106
+ ```sh
107
+ maturin develop
108
+ ```
109
+
110
+ ## Usage
111
+
112
+ ### Aalen's Additive Regression Model
113
+
114
+ ```python
115
+ from survival import AaregOptions, aareg
116
+
117
+ data = [
118
+ [1.0, 0.0, 0.5],
119
+ [2.0, 1.0, 1.5],
120
+ [3.0, 0.0, 2.5],
121
+ ]
122
+ variable_names = ["time", "event", "covariate1"]
123
+
124
+ # Create options with required parameters (formula, data, variable_names)
125
+ options = AaregOptions(
126
+ formula="time + event ~ covariate1",
127
+ data=data,
128
+ variable_names=variable_names,
129
+ )
130
+
131
+ # Optional: modify default values via setters
132
+ # options.weights = [1.0, 1.0, 1.0]
133
+ # options.qrtol = 1e-8
134
+ # options.dfbeta = True
135
+
136
+ result = aareg(options)
137
+ print(result)
138
+ ```
139
+
140
+ ### Penalized Splines (P-splines)
141
+
142
+ ```python
143
+ from survival import PSpline
144
+
145
+ x = [0.1 * i for i in range(100)]
146
+ pspline = PSpline(
147
+ x=x,
148
+ df=10,
149
+ theta=1.0,
150
+ eps=1e-6,
151
+ method="GCV",
152
+ boundary_knots=(0.0, 10.0),
153
+ intercept=True,
154
+ penalty=True,
155
+ )
156
+ pspline.fit()
157
+ ```
158
+
159
+ ### Concordance Index
160
+
161
+ ```python
162
+ from survival import perform_concordance1_calculation
163
+
164
+ time_data = [1.0, 2.0, 3.0, 4.0, 5.0, 1.0, 2.0, 3.0, 4.0, 5.0]
165
+ weights = [1.0, 1.0, 1.0, 1.0, 1.0]
166
+ indices = [0, 1, 2, 3, 4]
167
+ ntree = 5
168
+
169
+ result = perform_concordance1_calculation(time_data, weights, indices, ntree)
170
+ print(f"Concordance index: {result['concordance_index']}")
171
+ ```
172
+
173
+ ### Cox Regression with Frailty
174
+
175
+ ```python
176
+ from survival import perform_cox_regression_frailty
177
+
178
+ result = perform_cox_regression_frailty(
179
+ time_data=[...],
180
+ status_data=[...],
181
+ covariates=[...],
182
+ # ... other parameters
183
+ )
184
+ ```
185
+
186
+ ### Person-Years Calculation
187
+
188
+ ```python
189
+ from survival import perform_pyears_calculation
190
+
191
+ result = perform_pyears_calculation(
192
+ time_data=[...],
193
+ weights=[...],
194
+ # ... other parameters
195
+ )
196
+ ```
197
+
198
+ ### Kaplan-Meier Survival Curves
199
+
200
+ ```python
201
+ from survival import survfitkm, SurvFitKMOutput
202
+
203
+ # Example survival data
204
+ time = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
205
+ status = [1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0] # 1 = event, 0 = censored
206
+ weights = [1.0] * len(time) # Optional: equal weights
207
+
208
+ result = survfitkm(
209
+ time=time,
210
+ status=status,
211
+ weights=weights,
212
+ entry_times=None, # Optional: entry times for left-truncation
213
+ position=None, # Optional: position flags
214
+ reverse=False, # Optional: reverse time order
215
+ computation_type=0 # Optional: computation type
216
+ )
217
+
218
+ print(f"Time points: {result.time}")
219
+ print(f"Survival estimates: {result.estimate}")
220
+ print(f"Standard errors: {result.std_err}")
221
+ print(f"Number at risk: {result.n_risk}")
222
+ ```
223
+
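+ The fitted curve is a right-continuous step function, so it can be plotted directly from `result.time` and `result.estimate`. A small sketch (matplotlib is not a dependency of this package and is assumed to be installed separately):
+ 
+ ```python
+ import matplotlib.pyplot as plt
+ 
+ # Step plot of the Kaplan-Meier estimate computed above
+ plt.step(result.time, result.estimate, where="post")
+ plt.xlabel("Time")
+ plt.ylabel("Survival probability")
+ plt.ylim(0.0, 1.05)
+ plt.show()
+ ```
+ 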
224
+ ### Fine-Gray Competing Risks Model
225
+
226
+ ```python
227
+ from survival import finegray, FineGrayOutput
228
+
229
+ # Example competing risks data
230
+ tstart = [0.0, 0.0, 0.0, 0.0]
231
+ tstop = [1.0, 2.0, 3.0, 4.0]
232
+ ctime = [0.5, 1.5, 2.5, 3.5] # Cut points
233
+ cprob = [0.1, 0.2, 0.3, 0.4] # Cumulative probabilities
234
+ extend = [True, True, False, False] # Whether to extend intervals
235
+ keep = [True, True, True, True] # Which cut points to keep
236
+
237
+ result = finegray(
238
+ tstart=tstart,
239
+ tstop=tstop,
240
+ ctime=ctime,
241
+ cprob=cprob,
242
+ extend=extend,
243
+ keep=keep
244
+ )
245
+
246
+ print(f"Row indices: {result.row}")
247
+ print(f"Start times: {result.start}")
248
+ print(f"End times: {result.end}")
249
+ print(f"Weights: {result.wt}")
250
+ ```
251
+
252
+ ### Parametric Survival Regression (Accelerated Failure Time Models)
253
+
254
+ ```python
255
+ from survival import survreg, SurvivalFit, DistributionType
256
+
257
+ # Example survival data
258
+ time = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
259
+ status = [1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0] # 1 = event, 0 = censored
260
+ covariates = [
261
+ [1.0, 2.0],
262
+ [1.5, 2.5],
263
+ [2.0, 3.0],
264
+ [2.5, 3.5],
265
+ [3.0, 4.0],
266
+ [3.5, 4.5],
267
+ [4.0, 5.0],
268
+ [4.5, 5.5],
269
+ ]
270
+
271
+ # Fit parametric survival model
272
+ result = survreg(
273
+ time=time,
274
+ status=status,
275
+ covariates=covariates,
276
+ weights=None, # Optional: observation weights
277
+ offsets=None, # Optional: offset values
278
+ initial_beta=None, # Optional: initial coefficient values
279
+ strata=None, # Optional: stratification variable
280
+ distribution="weibull", # "extreme_value", "logistic", "gaussian", "weibull", or "lognormal"
281
+ max_iter=20, # Optional: maximum iterations
282
+ eps=1e-5, # Optional: convergence tolerance
283
+ tol_chol=1e-9, # Optional: Cholesky tolerance
284
+ )
285
+
286
+ print(f"Coefficients: {result.coefficients}")
287
+ print(f"Log-likelihood: {result.log_likelihood}")
288
+ print(f"Iterations: {result.iterations}")
289
+ print(f"Variance matrix: {result.variance_matrix}")
290
+ print(f"Convergence flag: {result.convergence_flag}")
291
+ ```
292
+
293
+ ### Cox Proportional Hazards Model
294
+
295
+ ```python
296
+ from survival import CoxPHModel, Subject
297
+
298
+ # Create a Cox PH model
299
+ model = CoxPHModel()
300
+
301
+ # Or create with data
302
+ covariates = [[1.0, 2.0], [2.0, 3.0], [1.5, 2.5]]
303
+ event_times = [1.0, 2.0, 3.0]
304
+ censoring = [1, 1, 0] # 1 = event, 0 = censored
305
+
306
+ model = CoxPHModel.new_with_data(covariates, event_times, censoring)
307
+
308
+ # Fit the model
309
+ model.fit(n_iters=10)
310
+
311
+ # Get results
312
+ print(f"Baseline hazard: {model.baseline_hazard}")
313
+ print(f"Risk scores: {model.risk_scores}")
314
+ print(f"Coefficients: {model.get_coefficients()}")
315
+
316
+ # Predict on new data
317
+ new_covariates = [[1.0, 2.0], [2.0, 3.0]]
318
+ predictions = model.predict(new_covariates)
319
+ print(f"Predictions: {predictions}")
320
+
321
+ # Calculate Brier score
322
+ brier = model.brier_score()
323
+ print(f"Brier score: {brier}")
324
+
325
+ # Compute survival curves for new covariates
326
+ new_covariates = [[1.0, 2.0], [2.0, 3.0]]
327
+ time_points = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0] # Optional: specific time points
328
+ times, survival_curves = model.survival_curve(new_covariates, time_points)
329
+ print(f"Time points: {times}")
330
+ print(f"Survival curves: {survival_curves}") # One curve per covariate set
331
+
332
+ # Create and add subjects
333
+ subject = Subject(
334
+ id=1,
335
+ covariates=[1.0, 2.0],
336
+ is_case=True,
337
+ is_subcohort=True,
338
+ stratum=0
339
+ )
340
+ model.add_subject(subject)
341
+ ```
342
+
343
+ ### Cox Martingale Residuals
344
+
345
+ ```python
346
+ from survival import coxmart
347
+
348
+ # Example survival data
349
+ time = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
350
+ status = [1, 1, 0, 1, 0, 1, 1, 0] # 1 = event, 0 = censored
351
+ score = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2] # Risk scores
352
+
353
+ # Calculate martingale residuals
354
+ residuals = coxmart(
355
+ time=time,
356
+ status=status,
357
+ score=score,
358
+ weights=None, # Optional: observation weights
359
+ strata=None, # Optional: stratification variable
360
+ method=0, # Optional: method (0 = Breslow, 1 = Efron)
361
+ )
362
+
363
+ print(f"Martingale residuals: {residuals}")
364
+ ```
365
+
366
+ ### Survival Difference Tests (Log-Rank Test)
367
+
368
+ ```python
369
+ from survival import survdiff2, SurvDiffResult
370
+
371
+ # Example: Compare survival between two groups
372
+ time = [1.0, 2.0, 3.0, 4.0, 5.0, 1.5, 2.5, 3.5, 4.5, 5.5]
373
+ status = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
374
+ group = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2] # Group 1 and Group 2
375
+
376
+ # Perform log-rank test (rho=0 for standard log-rank)
377
+ result = survdiff2(
378
+ time=time,
379
+ status=status,
380
+ group=group,
381
+ strata=None, # Optional: stratification variable
382
+ rho=0.0, # 0.0 = log-rank, 1.0 = Wilcoxon, other = generalized
383
+ )
384
+
385
+ print(f"Observed events: {result.observed}")
386
+ print(f"Expected events: {result.expected}")
387
+ print(f"Chi-squared statistic: {result.chi_squared}")
388
+ print(f"Degrees of freedom: {result.degrees_of_freedom}")
389
+ print(f"Variance matrix: {result.variance}")
390
+ ```
391
+
392
+ ### Built-in Datasets
393
+
394
+ The library includes 30 classic survival analysis datasets:
395
+
396
+ ```python
397
+ from survival import load_lung, load_aml, load_veteran
398
+
399
+ # Load the lung cancer dataset
400
+ lung = load_lung()
401
+ print(f"Columns: {lung['columns']}")
402
+ print(f"Number of rows: {len(lung['data'])}")
403
+
404
+ # Load the acute myelogenous leukemia dataset
405
+ aml = load_aml()
406
+
407
+ # Load the veteran's lung cancer dataset
408
+ veteran = load_veteran()
409
+ ```
410
+
411
+ **Available datasets:**
412
+ - `load_lung()` - NCCTG Lung Cancer Data
413
+ - `load_aml()` - Acute Myelogenous Leukemia Survival Data
414
+ - `load_veteran()` - Veterans' Administration Lung Cancer Study
415
+ - `load_ovarian()` - Ovarian Cancer Survival Data
416
+ - `load_colon()` - Colon Cancer Data
417
+ - `load_pbc()` - Primary Biliary Cholangitis Data
418
+ - `load_cgd()` - Chronic Granulomatous Disease Data
419
+ - `load_bladder()` - Bladder Cancer Recurrences
420
+ - `load_heart()` - Stanford Heart Transplant Data
421
+ - `load_kidney()` - Kidney Catheter Data
422
+ - `load_rats()` - Rat Treatment Data
423
+ - `load_stanford2()` - Stanford Heart Transplant Data (Extended)
424
+ - `load_udca()` - UDCA Clinical Trial Data
425
+ - `load_myeloid()` - Acute Myeloid Leukemia Clinical Trial
426
+ - `load_flchain()` - Free Light Chain Data
427
+ - `load_transplant()` - Liver Transplant Data
428
+ - `load_mgus()` - Monoclonal Gammopathy Data
429
+ - `load_mgus2()` - Monoclonal Gammopathy Data (Updated)
430
+ - `load_diabetic()` - Diabetic Retinopathy Data
431
+ - `load_retinopathy()` - Retinopathy Data
432
+ - `load_gbsg()` - German Breast Cancer Study Group Data
433
+ - `load_rotterdam()` - Rotterdam Tumor Bank Data
434
+ - `load_logan()` - Logan Unemployment Data
435
+ - `load_nwtco()` - National Wilms Tumor Study Data
436
+ - `load_solder()` - Solder Joint Data
437
+ - `load_tobin()` - Tobin's Tobit Data
438
+ - `load_rats2()` - Rat Tumorigenesis Data
439
+ - `load_nafld()` - Non-Alcoholic Fatty Liver Disease Data
440
+ - `load_cgd0()` - CGD Baseline Data
441
+ - `load_pbcseq()` - PBC Sequential Data
442
+
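+ Each loader returns a plain dict with `columns` and `data` keys, so rows can be reshaped with ordinary Python before being passed to the other functions. A minimal sketch feeding the lung data into `survfitkm`, assuming the dataset exposes columns literally named `time` and `status` with status coded 1 = event, 0 = censored (inspect `lung['columns']` for the actual names and coding):
+ 
+ ```python
+ from survival import load_lung, survfitkm
+ 
+ lung = load_lung()
+ cols = lung["columns"]
+ rows = lung["data"]
+ 
+ # Extract the two columns needed for a Kaplan-Meier fit
+ time_idx = cols.index("time")
+ status_idx = cols.index("status")
+ time = [float(row[time_idx]) for row in rows]
+ status = [float(row[status_idx]) for row in rows]
+ 
+ km = survfitkm(
+     time=time,
+     status=status,
+     weights=[1.0] * len(time),
+     entry_times=None,
+     position=None,
+     reverse=False,
+     computation_type=0,
+ )
+ print(km.estimate[:5])
+ ```
+ 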
443
+ ## API Reference
444
+
445
+ ### Classes
446
+
447
+ **Core Models:**
448
+ - `AaregOptions`: Configuration options for Aalen's additive regression model
449
+ - `PSpline`: Penalized spline class for smooth covariate effects
450
+ - `CoxPHModel`: Cox proportional hazards model class
451
+ - `Subject`: Subject data structure for Cox PH models
452
+ - `ConditionalLogisticRegression`: Conditional logistic regression model
453
+ - `ClogitDataSet`: Dataset for conditional logistic regression
454
+
455
+ **Survival Curves:**
456
+ - `SurvFitKMOutput`: Output from Kaplan-Meier survival curve fitting
457
+ - `SurvfitKMOptions`: Options for Kaplan-Meier fitting
458
+ - `KaplanMeierConfig`: Configuration for Kaplan-Meier
459
+ - `SurvFitAJ`: Output from Aalen-Johansen survival curve fitting
460
+ - `NelsonAalenResult`: Output from Nelson-Aalen estimator
461
+ - `StratifiedKMResult`: Output from stratified Kaplan-Meier
462
+
463
+ **Parametric Models:**
464
+ - `SurvivalFit`: Output from parametric survival regression
465
+ - `SurvregConfig`: Configuration for parametric survival regression
466
+ - `DistributionType`: Distribution types for parametric models (extreme_value, logistic, gaussian, weibull, lognormal)
467
+ - `FineGrayOutput`: Output from Fine-Gray competing risks model
468
+
469
+ **Statistical Tests:**
470
+ - `SurvDiffResult`: Output from survival difference tests
471
+ - `LogRankResult`: Output from log-rank test
472
+ - `TrendTestResult`: Output from trend tests
473
+ - `TestResult`: General test result output
474
+ - `ProportionalityTest`: Output from proportional hazards test
475
+ - `SurvObrienResult`: Output from O'Brien transformation
476
+
477
+ **Validation:**
478
+ - `BootstrapResult`: Output from bootstrap confidence interval calculations
479
+ - `CVResult`: Output from cross-validation
480
+ - `CalibrationResult`: Output from calibration analysis
481
+ - `PredictionResult`: Output from prediction functions
482
+ - `RiskStratificationResult`: Output from risk stratification
483
+ - `TdAUCResult`: Output from time-dependent AUC calculation
484
+
485
+ **RMST and Survival Metrics:**
486
+ - `RMSTResult`: Output from RMST calculation
487
+ - `RMSTComparisonResult`: Output from RMST comparison between groups
488
+ - `MedianSurvivalResult`: Output from median survival calculation
489
+ - `CumulativeIncidenceResult`: Output from cumulative incidence calculation
490
+ - `NNTResult`: Number needed to treat result
491
+
492
+ **Landmark Analysis:**
493
+ - `LandmarkResult`: Output from landmark analysis
494
+ - `ConditionalSurvivalResult`: Output from conditional survival calculation
495
+ - `HazardRatioResult`: Output from hazard ratio calculation
496
+ - `SurvivalAtTimeResult`: Output from survival at specific times
497
+ - `LifeTableResult`: Output from life table calculation
498
+
499
+ **Power and Sample Size:**
500
+ - `SampleSizeResult`: Output from sample size calculations
501
+ - `AccrualResult`: Output from accrual calculations
502
+
503
+ **Utilities:**
504
+ - `CoxCountOutput`: Output from Cox counting functions
505
+ - `SplitResult`: Output from time-splitting
506
+ - `CondenseResult`: Output from data condensing
507
+ - `Surv2DataResult`: Output from survival-to-data conversion
508
+ - `TimelineResult`: Output from timeline conversion
509
+ - `IntervalResult`: Output from interval calculations
510
+ - `LinkFunctionParams`: Link function parameters
511
+ - `CchMethod`: Case-cohort method specification
512
+ - `CohortData`: Cohort data structure
513
+
514
+ ### Functions
515
+
516
+ **Model Fitting:**
517
+ - `aareg(options)`: Fit Aalen's additive regression model
518
+ - `survreg(...)`: Fit parametric accelerated failure time models
519
+ - `perform_cox_regression_frailty(...)`: Fit Cox proportional hazards model with frailty
520
+
521
+ **Survival Curves:**
522
+ - `survfitkm(...)`: Fit Kaplan-Meier survival curves
523
+ - `survfitkm_with_options(...)`: Fit Kaplan-Meier with configuration options
524
+ - `survfitaj(...)`: Fit Aalen-Johansen survival curves (multi-state)
525
+ - `nelson_aalen_estimator(...)`: Calculate Nelson-Aalen estimator
526
+ - `stratified_kaplan_meier(...)`: Calculate stratified Kaplan-Meier curves
527
+ - `agsurv4(...)`: Andersen-Gill survival calculations (version 4)
528
+ - `agsurv5(...)`: Andersen-Gill survival calculations (version 5)
529
+
530
+ **Statistical Tests:**
531
+ - `survdiff2(...)`: Perform survival difference tests (log-rank, Wilcoxon, etc.)
532
+ - `logrank_test(...)`: Perform log-rank test
533
+ - `fleming_harrington_test(...)`: Perform Fleming-Harrington weighted test
534
+ - `logrank_trend(...)`: Perform log-rank trend test
535
+ - `lrt_test(...)`: Likelihood ratio test
536
+ - `wald_test_py(...)`: Wald test
537
+ - `score_test_py(...)`: Score test
538
+ - `ph_test(...)`: Proportional hazards assumption test
539
+ - `survobrien(...)`: O'Brien transformation for survival data
540
+
541
+ **Residuals:**
542
+ - `coxmart(...)`: Calculate Cox martingale residuals
543
+ - `agmart(...)`: Calculate Andersen-Gill martingale residuals
544
+ - `schoenfeld_residuals(...)`: Calculate Schoenfeld residuals
545
+ - `cox_score_residuals(...)`: Calculate Cox score residuals
546
+
547
+ **Concordance:**
548
+ - `perform_concordance1_calculation(...)`: Calculate concordance index (version 1)
549
+ - `perform_concordance3_calculation(...)`: Calculate concordance index (version 3)
550
+ - `perform_concordance_calculation(...)`: Calculate concordance index (version 5)
551
+ - `compute_concordance(...)`: General concordance calculation
552
+
553
+ **Validation:**
554
+ - `bootstrap_cox_ci(...)`: Bootstrap confidence intervals for Cox models
555
+ - `bootstrap_survreg_ci(...)`: Bootstrap confidence intervals for parametric models
556
+ - `cv_cox_concordance(...)`: Cross-validation for Cox model concordance
557
+ - `cv_survreg_loglik(...)`: Cross-validation for parametric model log-likelihood
558
+ - `calibration(...)`: Model calibration assessment
559
+ - `predict_cox(...)`: Predictions from Cox models
560
+ - `risk_stratification(...)`: Risk group stratification
561
+ - `td_auc(...)`: Time-dependent AUC calculation
562
+ - `brier(...)`: Calculate Brier score
563
+ - `integrated_brier(...)`: Calculate integrated Brier score
564
+
565
+ **RMST and Survival Metrics:**
566
+ - `rmst(...)`: Calculate restricted mean survival time
567
+ - `rmst_comparison(...)`: Compare RMST between groups
568
+ - `survival_quantile(...)`: Calculate survival quantiles (median, etc.)
569
+ - `cumulative_incidence(...)`: Calculate cumulative incidence
570
+ - `number_needed_to_treat(...)`: Calculate NNT
571
+
572
+ **Landmark Analysis:**
573
+ - `landmark_analysis(...)`: Perform landmark analysis
574
+ - `landmark_analysis_batch(...)`: Perform batch landmark analysis at multiple time points
575
+ - `conditional_survival(...)`: Calculate conditional survival
576
+ - `hazard_ratio(...)`: Calculate hazard ratios
577
+ - `survival_at_times(...)`: Calculate survival at specific time points
578
+ - `life_table(...)`: Generate life table
579
+
580
+ **Power and Sample Size:**
581
+ - `sample_size_survival(...)`: Calculate required sample size
582
+ - `sample_size_survival_freedman(...)`: Sample size using Freedman's method
583
+ - `power_survival(...)`: Calculate statistical power
584
+ - `expected_events(...)`: Calculate expected number of events
585
+
586
+ **Utilities:**
587
+ - `finegray(...)`: Fine-Gray competing risks model data preparation
588
+ - `perform_pyears_calculation(...)`: Calculate person-years of observation
589
+ - `perform_pystep_calculation(...)`: Perform step calculations
590
+ - `perform_pystep_simple_calculation(...)`: Perform simple step calculations
591
+ - `perform_score_calculation(...)`: Calculate score statistics
592
+ - `perform_agscore3_calculation(...)`: Calculate score statistics (version 3)
593
+ - `survsplit(...)`: Split survival data at specified times
594
+ - `survcondense(...)`: Condense survival data by collapsing adjacent intervals
595
+ - `surv2data(...)`: Convert survival objects to data format
596
+ - `to_timeline(...)`: Convert data to timeline format
597
+ - `from_timeline(...)`: Convert from timeline format to intervals
598
+ - `tmerge(...)`: Merge time-dependent covariates
599
+ - `tmerge2(...)`: Merge time-dependent covariates (version 2)
600
+ - `tmerge3(...)`: Merge time-dependent covariates (version 3)
601
+ - `collapse(...)`: Collapse survival data
602
+ - `coxcount1(...)`: Cox counting process calculations
603
+ - `coxcount2(...)`: Cox counting process calculations (version 2)
604
+ - `agexact(...)`: Exact Andersen-Gill calculations
605
+ - `norisk(...)`: No-risk calculations
606
+ - `cipoisson(...)`: Poisson confidence intervals
607
+ - `cipoisson_exact(...)`: Exact Poisson confidence intervals
608
+ - `cipoisson_anscombe(...)`: Anscombe Poisson confidence intervals
609
+ - `cox_callback(...)`: Cox model callback for iterative fitting
610
+
611
+ ## PSpline Options
612
+
613
+ The `PSpline` class provides penalized spline smoothing:
614
+
615
+ **Constructor Parameters:**
616
+ - `x`: Covariate vector (list of floats)
617
+ - `df`: Degrees of freedom (integer)
618
+ - `theta`: Roughness penalty (float)
619
+ - `eps`: Accuracy for degrees of freedom (float)
620
+ - `method`: Penalty method for tuning parameter selection. Supported methods:
621
+ - `"GCV"` - Generalized Cross-Validation
622
+ - `"UBRE"` - Unbiased Risk Estimator
623
+ - `"REML"` - Restricted Maximum Likelihood
624
+ - `"AIC"` - Akaike Information Criterion
625
+ - `"BIC"` - Bayesian Information Criterion
626
+ - `boundary_knots`: Tuple of (min, max) for the spline basis
627
+ - `intercept`: Whether to include an intercept in the basis
628
+ - `penalty`: Whether or not to apply the penalty
629
+
630
+ **Methods:**
631
+ - `fit()`: Fit the spline model, returns coefficients
632
+ - `predict(new_x)`: Predict values at new x points (see the sketch below)
633
+
634
+ **Properties:**
635
+ - `coefficients`: Fitted coefficients (None if not fitted)
636
+ - `fitted`: Whether the model has been fitted
637
+ - `df`: Degrees of freedom
638
+ - `eps`: Convergence tolerance
639
+
640
+ ## Development
641
+
642
+ Build the Rust library:
643
+ ```sh
644
+ cargo build
645
+ ```
646
+
647
+ Run tests:
648
+ ```sh
649
+ cargo test
650
+ ```
651
+
652
+ Format code:
653
+ ```sh
654
+ cargo fmt
655
+ ```
656
+
657
+ The codebase is organized with:
658
+ - Core routines in `src/`
659
+ - Tests and examples in `test/`
660
+ - Python bindings using PyO3
661
+
662
+ ## Dependencies
663
+
664
+ - [PyO3](https://github.com/PyO3/pyo3) - Python bindings
665
+ - [ndarray](https://github.com/rust-ndarray/ndarray) - N-dimensional arrays
666
+ - [faer](https://github.com/sarah-ek/faer-rs) - Pure-Rust linear algebra
667
+ - [itertools](https://github.com/rust-itertools/itertools) - Iterator utilities
668
+ - [rayon](https://github.com/rayon-rs/rayon) - Parallel computation
669
+
670
+ ## Compatibility
671
+
672
+ - This build is for Python only. R/extendr bindings are currently disabled.
673
+ - macOS users: make sure you are running a compatible Python version; on Apple Silicon, a Homebrew-installed Python is recommended (see the architecture check below).
674
+
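+ This wheel's tag (`cp314-cp314-macosx_10_12_x86_64`) targets x86_64 builds of Python 3.14; a quick standard-library check of what your interpreter reports (not part of this package's API):
+ 
+ ```python
+ import platform
+ import sys
+ 
+ # Interpreter version and CPU architecture as seen by Python itself
+ print(sys.version)         # e.g. "3.14.0 (...)"
+ print(platform.machine())  # "x86_64" or "arm64"
+ ```
+ 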
675
+ ## License
676
+
677
+ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
678
+
@@ -0,0 +1,9 @@
1
+ survival/__init__.py,sha256=trXbzNbtTP6WkyJ3mTYAEJr7YXFlaMYtK9unUtBXW68,442
2
+ survival/_survival.cpython-314-darwin.so,sha256=ESy6dbVkumPavjmlYo2H8g3qDBCPnXv38Kzi4XBJmoU,10326888
3
+ survival/_survival.pyi,sha256=7JM_pLL-hUVfxefLM-PM7zZvu4yMgfPHuEBON-reHp4,20298
4
+ survival/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
5
+ survival/sklearn_compat.py,sha256=CDtuOVvrH5M3XE95-Vb3pzHWeALfwOU55uNOahpLt90,42823
6
+ survival-1.1.36.dist-info/METADATA,sha256=38aFy08in-BXxFn-8pcNZ4oT9Sox2pQL9IyIbIgunzo,22612
7
+ survival-1.1.36.dist-info/WHEEL,sha256=jyP0hJCe-fSX_gEscesIqqW7KerDJw7iyldGx-__w10,107
8
+ survival-1.1.36.dist-info/licenses/LICENSE,sha256=hS2BuXZUcQTPPxaojumqQeGtQjachYGOChZXBWbQQ7E,1070
9
+ survival-1.1.36.dist-info/RECORD,,
@@ -0,0 +1,4 @@
1
+ Wheel-Version: 1.0
2
+ Generator: maturin (1.11.5)
3
+ Root-Is-Purelib: false
4
+ Tag: cp314-cp314-macosx_10_12_x86_64