metacountregressor 0.1.91__tar.gz → 0.1.96__tar.gz

Files changed (26)
  1. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/PKG-INFO +143 -8
  2. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/README.rst +150 -8
  3. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor.egg-info/PKG-INFO +143 -8
  4. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/LICENSE.txt +0 -0
  5. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/__init__.py +0 -0
  6. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/_device_cust.py +0 -0
  7. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/data_split_helper.py +0 -0
  8. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/halton.py +0 -0
  9. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/helperprocess.py +0 -0
  10. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/main.py +0 -0
  11. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/main_old.py +0 -0
  12. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/metaheuristics.py +0 -0
  13. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/pareto_file.py +0 -0
  14. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/pareto_logger__plot.py +0 -0
  15. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/setup.py +0 -0
  16. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/single_objective_finder.py +0 -0
  17. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/solution.py +0 -0
  18. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor/test_generated_paper2.py +0 -0
  19. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor.egg-info/SOURCES.txt +0 -0
  20. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor.egg-info/dependency_links.txt +0 -0
  21. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor.egg-info/not-zip-safe +0 -0
  22. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor.egg-info/requires.txt +0 -0
  23. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/metacountregressor.egg-info/top_level.txt +0 -0
  24. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/setup.cfg +0 -0
  25. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/setup.py +0 -0
  26. {metacountregressor-0.1.91 → metacountregressor-0.1.96}/tests/test.py +0 -0
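
Before the raw diffs, here is a minimal, self-contained sketch of the harmony-search workflow that the expanded README sections below document. It simply reassembles the snippets shown in the diffs together with the imports they omit; the import paths (`metacountregressor.solution.ObjectiveFunction`, `metacountregressor.metaheuristics.harmony_search`) are assumptions inferred from the package layout listed above, not confirmed by this diff.

```python
# Sketch only: assumed import paths, mirroring the README snippets in the diffs below.
import numpy as np
import pandas as pd

from metacountregressor.solution import ObjectiveFunction      # assumed module path
from metacountregressor.metaheuristics import harmony_search   # assumed module path

# Set up the example crash-count data exactly as in the README snippet.
df = pd.read_csv("https://raw.githubusercontent.com/zahern/data/main/Ex-16-3.csv")
y = df['FREQ']                     # frequency of crashes (count outcome)
df['Offset'] = np.log(df['AADT'])  # exposure offset
X = df.drop(columns=['FREQ', 'ID', 'AADT'])

# Search arguments as given in the new README section (two objectives, simple models).
arguments = {
    'is_multi': 1,
    'test_percentage': 0.2,
    'test_complexity': 3,
    'obj_1': 'BIC', '_obj_2': 'RMSE_TEST',
    'instance_number': 'hs_run',
    'distribution': ['Normal'],
    'Model': [0, 1],               # Poisson and Negative Binomial
    'transformations': ['no', 'sqrt', 'archsinh'],
    '_max_time': 10000,
}
# Harmony search hyperparameters, values taken from the diff.
argument_hs = {'_hms': 20, '_mpai': 1, '_par': 0.3, '_hmcr': .5}

obj_fun = ObjectiveFunction(X, y, **arguments)
results = harmony_search(obj_fun, None, argument_hs)
print(results)
```

The diffs below cover, in order, PKG-INFO, README.rst, and the mirrored metacountregressor.egg-info/PKG-INFO.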
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: metacountregressor
- Version: 0.1.91
+ Version: 0.1.96
  Summary: Extensions for a Python package for estimation of count models.
  Home-page: https://github.com/zahern/CountDataEstimation
  Author: Zeke Ahern
@@ -274,6 +274,8 @@ Let's begin by fitting very simple models and use the structure of these models
 
 
  ```python
+
+ '''Setup Data'''
  df = pd.read_csv(
  "https://raw.githubusercontent.com/zahern/data/main/Ex-16-3.csv")
  X = df
@@ -281,25 +283,158 @@ y = df['FREQ'] # Frequency of crashes
  X['Offset'] = np.log(df['AADT']) # Explicitly define how to offset the data; no offset otherwise
  # Drop Y, selected offset term and ID as there are no panels
  X = df.drop(columns=['FREQ', 'ID', 'AADT'])
-
+ '''Arguments for Solution'''
  arguments = {
- 'algorithm': 'hs', #alternatively input 'de', or 'sa'
- 'is_multi': 1,
+ 'is_multi': 1, # whether two objectives are considered
  'test_percentage': 0.2, # used in multi-objective optimisation only. Saves 20% of data for testing.
  'val_percentage:': 0.2, # Saves 20% of data for testing.
  'test_complexity': 3, # for very simple models
  'obj_1': 'BIC', '_obj_2': 'RMSE_TEST',
- 'instance_number': 'name', # used for creeating a named folder where your models are saved into from the directory
+ 'instance_number': 'hs_run', # used for creating a named folder, under the working directory, where your models are saved
  'distribution': ['Normal'],
- 'Model': [0], # or equivalently ['POS', 'NB']
+ 'Model': [0, 1], # or equivalently ['POS', 'NB']
  'transformations': ['no', 'sqrt', 'archsinh'],
  '_max_time': 10000
- }
+ }
+ '''Arguments for the solution algorithm'''
+ argument_hs = {
+ '_hms': 20, # harmony memory size
+ '_mpai': 1, # pitch adjustment index
+ '_par': 0.3,
+ '_hmcr': .5
+ }
  obj_fun = ObjectiveFunction(X, y, **arguments)
- results = harmony_search(obj_fun)
+ results = harmony_search(obj_fun, None, argument_hs)
  print(results)
  ```
 
+ ## Example: Assistance by Differential Evolution and Simulated Annealing
+ Similar to the above example, we only need to change the hyperparameters; the obj_fun can remain the same.
+
+
+ ```python
+ argument_de = {'_AI': 2,
+ '_crossover_perc': .2,
+ '_max_iter': 1000,
+ '_pop_size': 25
+ }
+ de_results = differential_evolution(obj_fun, None, **argument_de)
+ print(de_results)
+
+
+ args_sa = {'alpha': .99,
+ 'STEPS_PER_TEMP': 10,
+ 'INTL_ACPT': 0.5,
+ '_crossover_perc': .3,
+ 'MAX_ITERATIONS': 1000,
+ '_num_intl_slns': 25,
+ }
+
+ sa_results = simulated_annealing(obj_fun, None, **args_sa)
+ print(sa_results)
+ ```
+
+ ## Comparing to statsmodels
+ The following example illustrates how the output compares to well-known packages, including statsmodels.
+
+
+ ```python
+ # Load modules and data
+ import statsmodels.api as sm
+
+ data = sm.datasets.sunspots.load_pandas().data
+ #print(data.exog)
+ data_exog = data['YEAR']
+ data_exog = sm.add_constant(data_exog)
+ data_endog = data['SUNACTIVITY']
+
+ # Fit a Negative Binomial model with the default link function.
+ import numpy as np
+
+ gamma_model = sm.NegativeBinomial(data_endog, data_exog)
+ gamma_results = gamma_model.fit()
+
+ print(gamma_results.summary())
+
+
+ # Now let's compare this to metacountregressor
+
+
+ # Model decisions
+ manual_fit_spec = {
+ 'fixed_terms': ['const','YEAR'],
+ 'rdm_terms': [],
+ 'rdm_cor_terms': [],
+ 'grouped_terms': [],
+ 'hetro_in_means': [],
+ 'transformations': ['no', 'no'],
+ 'dispersion': 1 # Negative Binomial
+ }
+
+ # Arguments
+ arguments = {
+ 'algorithm': 'hs',
+ 'test_percentage': 0,
+ 'test_complexity': 6,
+ 'instance_number': 'name',
+ 'Manual_Fit': manual_fit_spec
+ }
+ obj_fun = ObjectiveFunction(data_exog, data_endog, **arguments)
+ ```
+
+ Optimization terminated successfully.
+ Current function value: 4.877748
+ Iterations: 22
+ Function evaluations: 71
+ Gradient evaluations: 70
+ NegativeBinomial Regression Results
+ ==============================================================================
+ Dep. Variable: SUNACTIVITY No. Observations: 309
+ Model: NegativeBinomial Df Residuals: 307
+ Method: MLE Df Model: 1
+ Date: Tue, 13 Aug 2024 Pseudo R-squ.: 0.004087
+ Time: 14:13:22 Log-Likelihood: -1507.2
+ converged: True LL-Null: -1513.4
+ Covariance Type: nonrobust LLR p-value: 0.0004363
+ ==============================================================================
+ coef std err z P>|z| [0.025 0.975]
+ ------------------------------------------------------------------------------
+ const 0.2913 1.017 0.287 0.774 -1.701 2.284
+ YEAR 0.0019 0.001 3.546 0.000 0.001 0.003
+ alpha 0.7339 0.057 12.910 0.000 0.622 0.845
+ ==============================================================================
+ 0.1.88
+ Setup Complete...
+ Benchmaking test with Seed 42
+ 1
+ --------------------------------------------------------------------------------
+ Log-Likelihood: -1509.0683662284273
+ --------------------------------------------------------------------------------
+ bic: 3035.84
+ --------------------------------------------------------------------------------
+ MSE: 10000000.00
+ +--------+--------+-------+----------+----------+------------+
+ | Effect | $\tau$ | Coeff | Std. Err | z-values | Prob |z|>Z |
+ +========+========+=======+==========+==========+============+
+ | const  | no     | 0.10  | 0.25     | 0.39     | 0.70       |
+ +--------+--------+-------+----------+----------+------------+
+ | YEAR   | no     | 0.00  | 0.00     | 20.39    | 0.00***    |
+ +--------+--------+-------+----------+----------+------------+
+ | nb     |        | 1.33  | 0.00     | 50.00    | 0.00***    |
+ +--------+--------+-------+----------+----------+------------+
+
+
  ## Paper
 
  The following tutorial is in conjunction with our latest paper. A link to the current paper can be found here [MetaCountRegressor](https://www.overleaf.com/read/mszwpwzcxsng#c5eb0c)
@@ -9,7 +9,7 @@ Tutorial also available as a jupyter notebook
  =============================================
 
  `Download Example
- Notebook <https://github.com/zahern/CountDataEstimation/blob/main/README.ipynb>`__
+ Notebook <https://github.com/zahern/CountDataEstimation/blob/main/Tutorial.ipynb>`__
 
  The tutorial provides more extensive examples on how to run the code and
  perform experiments. Further documentation is currently in development.
@@ -376,6 +376,8 @@ factors for our search.
 
  .. code:: ipython3
 
+
+ '''Setup Data'''
  df = pd.read_csv(
  "https://raw.githubusercontent.com/zahern/data/main/Ex-16-3.csv")
  X = df
@@ -383,24 +385,164 @@ factors for our search.
  X['Offset'] = np.log(df['AADT']) # Explicitly define how to offset the data; no offset otherwise
  # Drop Y, selected offset term and ID as there are no panels
  X = df.drop(columns=['FREQ', 'ID', 'AADT'])
-
+ '''Arguments for Solution'''
  arguments = {
- 'algorithm': 'hs', #alternatively input 'de', or 'sa'
- 'is_multi': 1,
+ 'is_multi': 1, # whether two objectives are considered
  'test_percentage': 0.2, # used in multi-objective optimisation only. Saves 20% of data for testing.
  'val_percentage:': 0.2, # Saves 20% of data for testing.
  'test_complexity': 3, # for very simple models
  'obj_1': 'BIC', '_obj_2': 'RMSE_TEST',
- 'instance_number': 'name', # used for creeating a named folder where your models are saved into from the directory
+ 'instance_number': 'hs_run', # used for creating a named folder, under the working directory, where your models are saved
  'distribution': ['Normal'],
- 'Model': [0], # or equivalently ['POS', 'NB']
+ 'Model': [0, 1], # or equivalently ['POS', 'NB']
  'transformations': ['no', 'sqrt', 'archsinh'],
  '_max_time': 10000
- }
+ }
+ '''Arguments for the solution algorithm'''
+ argument_hs = {
+ '_hms': 20, # harmony memory size
+ '_mpai': 1, # pitch adjustment index
+ '_par': 0.3,
+ '_hmcr': .5
+ }
  obj_fun = ObjectiveFunction(X, y, **arguments)
- results = harmony_search(obj_fun)
+ results = harmony_search(obj_fun, None, argument_hs)
  print(results)
 
+ Example: Assistance by Differential Evolution and Simulated Annealing
+ ----------------------------------------------------------------------
+
+ Similar to the above example, we only need to change the
+ hyperparameters; the obj_fun can remain the same.
+
+ .. code:: ipython3
+
+ argument_de = {'_AI': 2,
+ '_crossover_perc': .2,
+ '_max_iter': 1000,
+ '_pop_size': 25
+ }
+ de_results = differential_evolution(obj_fun, None, **argument_de)
+ print(de_results)
+
+
+ args_sa = {'alpha': .99,
+ 'STEPS_PER_TEMP': 10,
+ 'INTL_ACPT': 0.5,
+ '_crossover_perc': .3,
+ 'MAX_ITERATIONS': 1000,
+ '_num_intl_slns': 25,
+ }
+
+ sa_results = simulated_annealing(obj_fun, None, **args_sa)
+ print(sa_results)
+
+ Comparing to statsmodels
+ ------------------------
+
+ The following example illustrates how the output compares to well-known
+ packages, including statsmodels.
+
+ .. code:: ipython3
+
+ # Load modules and data
+ import statsmodels.api as sm
+
+ data = sm.datasets.sunspots.load_pandas().data
+ #print(data.exog)
+ data_exog = data['YEAR']
+ data_exog = sm.add_constant(data_exog)
+ data_endog = data['SUNACTIVITY']
+
+ # Fit a Negative Binomial model with the default link function.
+ import numpy as np
+
+ gamma_model = sm.NegativeBinomial(data_endog, data_exog)
+ gamma_results = gamma_model.fit()
+
+ print(gamma_results.summary())
+
+
+ # Now let's compare this to metacountregressor
+
+
+ # Model decisions
+ manual_fit_spec = {
+ 'fixed_terms': ['const','YEAR'],
+ 'rdm_terms': [],
+ 'rdm_cor_terms': [],
+ 'grouped_terms': [],
+ 'hetro_in_means': [],
+ 'transformations': ['no', 'no'],
+ 'dispersion': 1 # Negative Binomial
+ }
+
+ # Arguments
+ arguments = {
+ 'algorithm': 'hs',
+ 'test_percentage': 0,
+ 'test_complexity': 6,
+ 'instance_number': 'name',
+ 'Manual_Fit': manual_fit_spec
+ }
+ obj_fun = ObjectiveFunction(data_exog, data_endog, **arguments)
+
+
+ .. parsed-literal::
+
+ Optimization terminated successfully.
+ Current function value: 4.877748
+ Iterations: 22
+ Function evaluations: 71
+ Gradient evaluations: 70
+ NegativeBinomial Regression Results
+ ==============================================================================
+ Dep. Variable: SUNACTIVITY No. Observations: 309
+ Model: NegativeBinomial Df Residuals: 307
+ Method: MLE Df Model: 1
+ Date: Tue, 13 Aug 2024 Pseudo R-squ.: 0.004087
+ Time: 14:13:22 Log-Likelihood: -1507.2
+ converged: True LL-Null: -1513.4
+ Covariance Type: nonrobust LLR p-value: 0.0004363
+ ==============================================================================
+ coef std err z P>|z| [0.025 0.975]
+ ------------------------------------------------------------------------------
+ const 0.2913 1.017 0.287 0.774 -1.701 2.284
+ YEAR 0.0019 0.001 3.546 0.000 0.001 0.003
+ alpha 0.7339 0.057 12.910 0.000 0.622 0.845
+ ==============================================================================
+ 0.1.88
+ Setup Complete...
+ Benchmaking test with Seed 42
+ 1
+ --------------------------------------------------------------------------------
+ Log-Likelihood: -1509.0683662284273
+ --------------------------------------------------------------------------------
+ bic: 3035.84
+ --------------------------------------------------------------------------------
+ MSE: 10000000.00
+ +--------+--------+-------+----------+----------+------------+
+ | Effect | $\tau$ | Coeff | Std. Err | z-values | Prob |z|>Z |
+ +========+========+=======+==========+==========+============+
+ | const  | no     | 0.10  | 0.25     | 0.39     | 0.70       |
+ +--------+--------+-------+----------+----------+------------+
+ | YEAR   | no     | 0.00  | 0.00     | 20.39    | 0.00***    |
+ +--------+--------+-------+----------+----------+------------+
+ | nb     |        | 1.33  | 0.00     | 50.00    | 0.00***    |
+ +--------+--------+-------+----------+----------+------------+
+
+
  Paper
  -----
 
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: metacountregressor
- Version: 0.1.91
+ Version: 0.1.96
  Summary: Extensions for a Python package for estimation of count models.
  Home-page: https://github.com/zahern/CountDataEstimation
  Author: Zeke Ahern
@@ -274,6 +274,8 @@ Let's begin by fitting very simple models and use the structure of these models
 
 
  ```python
+
+ '''Setup Data'''
  df = pd.read_csv(
  "https://raw.githubusercontent.com/zahern/data/main/Ex-16-3.csv")
  X = df
@@ -281,25 +283,158 @@ y = df['FREQ'] # Frequency of crashes
  X['Offset'] = np.log(df['AADT']) # Explicitly define how to offset the data; no offset otherwise
  # Drop Y, selected offset term and ID as there are no panels
  X = df.drop(columns=['FREQ', 'ID', 'AADT'])
-
+ '''Arguments for Solution'''
  arguments = {
- 'algorithm': 'hs', #alternatively input 'de', or 'sa'
- 'is_multi': 1,
+ 'is_multi': 1, # whether two objectives are considered
  'test_percentage': 0.2, # used in multi-objective optimisation only. Saves 20% of data for testing.
  'val_percentage:': 0.2, # Saves 20% of data for testing.
  'test_complexity': 3, # for very simple models
  'obj_1': 'BIC', '_obj_2': 'RMSE_TEST',
- 'instance_number': 'name', # used for creeating a named folder where your models are saved into from the directory
+ 'instance_number': 'hs_run', # used for creating a named folder, under the working directory, where your models are saved
  'distribution': ['Normal'],
- 'Model': [0], # or equivalently ['POS', 'NB']
+ 'Model': [0, 1], # or equivalently ['POS', 'NB']
  'transformations': ['no', 'sqrt', 'archsinh'],
  '_max_time': 10000
- }
+ }
+ '''Arguments for the solution algorithm'''
+ argument_hs = {
+ '_hms': 20, # harmony memory size
+ '_mpai': 1, # pitch adjustment index
+ '_par': 0.3,
+ '_hmcr': .5
+ }
  obj_fun = ObjectiveFunction(X, y, **arguments)
- results = harmony_search(obj_fun)
+ results = harmony_search(obj_fun, None, argument_hs)
  print(results)
  ```
 
+ ## Example: Assistance by Differential Evolution and Simulated Annealing
+ Similar to the above example, we only need to change the hyperparameters; the obj_fun can remain the same.
+
+
+ ```python
+ argument_de = {'_AI': 2,
+ '_crossover_perc': .2,
+ '_max_iter': 1000,
+ '_pop_size': 25
+ }
+ de_results = differential_evolution(obj_fun, None, **argument_de)
+ print(de_results)
+
+
+ args_sa = {'alpha': .99,
+ 'STEPS_PER_TEMP': 10,
+ 'INTL_ACPT': 0.5,
+ '_crossover_perc': .3,
+ 'MAX_ITERATIONS': 1000,
+ '_num_intl_slns': 25,
+ }
+
+ sa_results = simulated_annealing(obj_fun, None, **args_sa)
+ print(sa_results)
+ ```
+
+ ## Comparing to statsmodels
+ The following example illustrates how the output compares to well-known packages, including statsmodels.
+
+
+ ```python
+ # Load modules and data
+ import statsmodels.api as sm
+
+ data = sm.datasets.sunspots.load_pandas().data
+ #print(data.exog)
+ data_exog = data['YEAR']
+ data_exog = sm.add_constant(data_exog)
+ data_endog = data['SUNACTIVITY']
+
+ # Fit a Negative Binomial model with the default link function.
+ import numpy as np
+
+ gamma_model = sm.NegativeBinomial(data_endog, data_exog)
+ gamma_results = gamma_model.fit()
+
+ print(gamma_results.summary())
+
+
+ # Now let's compare this to metacountregressor
+
+
+ # Model decisions
+ manual_fit_spec = {
+ 'fixed_terms': ['const','YEAR'],
+ 'rdm_terms': [],
+ 'rdm_cor_terms': [],
+ 'grouped_terms': [],
+ 'hetro_in_means': [],
+ 'transformations': ['no', 'no'],
+ 'dispersion': 1 # Negative Binomial
+ }
+
+ # Arguments
+ arguments = {
+ 'algorithm': 'hs',
+ 'test_percentage': 0,
+ 'test_complexity': 6,
+ 'instance_number': 'name',
+ 'Manual_Fit': manual_fit_spec
+ }
+ obj_fun = ObjectiveFunction(data_exog, data_endog, **arguments)
+ ```
+
+ Optimization terminated successfully.
+ Current function value: 4.877748
+ Iterations: 22
+ Function evaluations: 71
+ Gradient evaluations: 70
+ NegativeBinomial Regression Results
+ ==============================================================================
+ Dep. Variable: SUNACTIVITY No. Observations: 309
+ Model: NegativeBinomial Df Residuals: 307
+ Method: MLE Df Model: 1
+ Date: Tue, 13 Aug 2024 Pseudo R-squ.: 0.004087
+ Time: 14:13:22 Log-Likelihood: -1507.2
+ converged: True LL-Null: -1513.4
+ Covariance Type: nonrobust LLR p-value: 0.0004363
+ ==============================================================================
+ coef std err z P>|z| [0.025 0.975]
+ ------------------------------------------------------------------------------
+ const 0.2913 1.017 0.287 0.774 -1.701 2.284
+ YEAR 0.0019 0.001 3.546 0.000 0.001 0.003
+ alpha 0.7339 0.057 12.910 0.000 0.622 0.845
+ ==============================================================================
+ 0.1.88
+ Setup Complete...
+ Benchmaking test with Seed 42
+ 1
+ --------------------------------------------------------------------------------
+ Log-Likelihood: -1509.0683662284273
+ --------------------------------------------------------------------------------
+ bic: 3035.84
+ --------------------------------------------------------------------------------
+ MSE: 10000000.00
+ +--------+--------+-------+----------+----------+------------+
+ | Effect | $\tau$ | Coeff | Std. Err | z-values | Prob |z|>Z |
+ +========+========+=======+==========+==========+============+
+ | const  | no     | 0.10  | 0.25     | 0.39     | 0.70       |
+ +--------+--------+-------+----------+----------+------------+
+ | YEAR   | no     | 0.00  | 0.00     | 20.39    | 0.00***    |
+ +--------+--------+-------+----------+----------+------------+
+ | nb     |        | 1.33  | 0.00     | 50.00    | 0.00***    |
+ +--------+--------+-------+----------+----------+------------+
+
+
  ## Paper
 
  The following tutorial is in conjunction with our latest paper. A link to the current paper can be found here [MetaCountRegressor](https://www.overleaf.com/read/mszwpwzcxsng#c5eb0c)