imsciences 0.6.3.0__py3-none-any.whl → 0.6.3.2__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,383 @@
1
+ Metadata-Version: 2.1
2
+ Name: imsciences
3
+ Version: 0.6.3.2
4
+ Summary: IMS Data Processing Package
5
+ Author: IMS
6
+ Author-email: cam@im-sciences.com
7
+ License: MIT
8
+ Keywords: python,data processing,apis
9
+ Classifier: Development Status :: 3 - Alpha
10
+ Classifier: Intended Audience :: Developers
11
+ Classifier: Programming Language :: Python :: 3
12
+ Classifier: Operating System :: Unix
13
+ Classifier: Operating System :: MacOS :: MacOS X
14
+ Classifier: Operating System :: Microsoft :: Windows
15
+ Description-Content-Type: text/markdown
16
+ Requires-Dist: pandas
17
+ Requires-Dist: plotly
18
+ Requires-Dist: numpy
19
+ Requires-Dist: fredapi
20
+ Requires-Dist: requests-cache
21
+ Requires-Dist: geopy
22
+ Requires-Dist: bs4
23
+
24
+ # IMS Package Documentation
25
+
26
+ The IMS package is a Python library for processing incoming data into a format that can be used specifically for econometrics projects built on weekly time-series data. IMS processing offers a variety of functions to manipulate and analyze data efficiently. Here are the functionalities provided by the package:
27
+
28
+ ## Data Processing
29
+
30
+ # Function Descriptions and Usage Examples
31
+
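+ The usage sketches added below several of the functions assume the data-processing helpers can be imported as shown here. This import path is inferred from the package layout (`imsciences/datafunctions.py`) rather than from documented usage, so adjust it to match your installation.
+
+ ```python
+ # Assumed setup reused by the example sketches below -- an inference from the
+ # package layout, not a documented API.
+ import pandas as pd
+ from imsciences import datafunctions as ims
+ ```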
32
+ ## 1. `get_wd_levels`
33
+ - **Description**: Gets the working directory, with the option of moving up a specified number of parent levels.
34
+ - **Usage**: `get_wd_levels(levels)`
35
+ - **Example**: `get_wd_levels(0)`
36
+
37
+ ---
38
+
39
+ ## 2. `remove_rows`
40
+ - **Description**: Removes a specified number of rows from a pandas DataFrame.
41
+ - **Usage**: `remove_rows(data_frame, num_rows_to_remove)`
42
+ - **Example**: `remove_rows(df, 2)`
43
+
44
+ ---
45
+
46
+ ## 3. `aggregate_daily_to_wc_long`
47
+ - **Description**: Aggregates daily data into weekly data, grouping and summing specified columns, starting on a specified day of the week.
48
+ - **Usage**: `aggregate_daily_to_wc_long(df, date_column, group_columns, sum_columns, wc, aggregation='sum')`
49
+ - **Example**: `aggregate_daily_to_wc_long(df, 'date', ['platform'], ['cost', 'impressions', 'clicks'], 'mon', 'average')`
50
+
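+ As a sketch (using the assumed `ims` alias from the setup snippet above), a daily media log can be rolled up into Monday-commencing weeks like this:
+
+ ```python
+ import pandas as pd
+
+ # A small daily extract spanning two ISO weeks.
+ daily = pd.DataFrame({
+     "date": pd.date_range("2023-01-02", periods=14, freq="D"),
+     "platform": ["facebook"] * 14,
+     "cost": [10.0] * 14,
+     "impressions": [1000] * 14,
+     "clicks": [25] * 14,
+ })
+
+ # Group by platform and sum each metric into weeks starting on Monday.
+ weekly = ims.aggregate_daily_to_wc_long(
+     daily, "date", ["platform"], ["cost", "impressions", "clicks"], "mon"
+ )
+ ```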
51
+ ---
52
+
53
+ ## 4. `convert_monthly_to_daily`
54
+ - **Description**: Converts monthly data in a DataFrame to daily data by expanding and dividing the numeric values.
55
+ - **Usage**: `convert_monthly_to_daily(df, date_column, divide)`
56
+ - **Example**: `convert_monthly_to_daily(df, 'date')`
57
+
58
+ ---
59
+
60
+ ## 5. `plot_two`
61
+ - **Description**: Plots specified columns from two different DataFrames using a shared date column. Useful for comparing data.
62
+ - **Usage**: `plot_two(df1, col1, df2, col2, date_column, same_axis=True)`
63
+ - **Example**: `plot_two(df1, 'cost', df2, 'cost', 'obs', True)`
64
+
65
+ ---
66
+
67
+ ## 6. `remove_nan_rows`
68
+ - **Description**: Removes rows from a DataFrame where the specified column has NaN values.
69
+ - **Usage**: `remove_nan_rows(df, col_to_remove_rows)`
70
+ - **Example**: `remove_nan_rows(df, 'date')`
71
+
72
+ ---
73
+
74
+ ## 7. `filter_rows`
75
+ - **Description**: Filters the DataFrame based on whether the values in a specified column are in a provided list.
76
+ - **Usage**: `filter_rows(df, col_to_filter, list_of_filters)`
77
+ - **Example**: `filter_rows(df, 'country', ['UK', 'IE'])`
78
+
79
+ ---
80
+
81
+ ## 8. `plot_one`
82
+ - **Description**: Plots a specified column from a DataFrame.
83
+ - **Usage**: `plot_one(df1, col1, date_column)`
84
+ - **Example**: `plot_one(df, 'Spend', 'OBS')`
85
+
86
+ ---
87
+
88
+ ## 9. `week_of_year_mapping`
89
+ - **Description**: Converts a week column in `yyyy-Www` or `yyyy-ww` format to a week-commencing date.
90
+ - **Usage**: `week_of_year_mapping(df, week_col, start_day_str)`
91
+ - **Example**: `week_of_year_mapping(df, 'week', 'mon')`
92
+
93
+ ---
94
+
95
+ ## 10. `exclude_rows`
96
+ - **Description**: Removes rows from a DataFrame based on whether the values in a specified column are not in a provided list.
97
+ - **Usage**: `exclude_rows(df, col_to_filter, list_of_filters)`
98
+ - **Example**: `exclude_rows(df, 'week', ['2022-W20', '2022-W21'])`
99
+
100
+ ---
101
+
102
+ ## 11. `rename_cols`
103
+ - **Description**: Renames columns in a pandas DataFrame.
104
+ - **Usage**: `rename_cols(df, name)`
105
+ - **Example**: `rename_cols(df, 'ame_facebook')`
106
+
107
+ ---
108
+
109
+ ## 12. `merge_new_and_old`
110
+ - **Description**: Creates a new DataFrame with two columns: one for dates and one for merged numeric values.
111
+ - Merges numeric values from specified columns in the old and new DataFrames based on a given cutoff date.
112
+ - **Usage**: `merge_new_and_old(old_df, old_col, new_df, new_col, cutoff_date, date_col_name='OBS')`
113
+ - **Example**: `merge_new_and_old(df1, 'old_col', df2, 'new_col', '2023-01-15')`
114
+
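+ A typical use is splicing a re-pulled series onto an older extract at a cutoff date. A sketch, using the assumed `ims` alias:
+
+ ```python
+ import pandas as pd
+
+ old = pd.DataFrame({"OBS": pd.date_range("2022-11-07", periods=12, freq="W-MON"),
+                     "spend_old": range(12)})
+ new = pd.DataFrame({"OBS": pd.date_range("2023-01-02", periods=8, freq="W-MON"),
+                     "spend_new": range(100, 108)})
+
+ # Combine the two series around the cutoff date (old values on one side of the
+ # cutoff, new values on the other) into a single dated column.
+ spliced = ims.merge_new_and_old(old, "spend_old", new, "spend_new", "2023-01-15")
+ ```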
115
+ ---
116
+
117
+ ## 13. `merge_dataframes_on_date`
118
+ - **Description**: Merge a list of DataFrames on a common column.
119
+ - **Usage**: `merge_dataframes_on_date(dataframes, common_column='OBS', merge_how='outer')`
120
+ - **Example**: `merge_dataframes_on_date([df1, df2, df3], common_column='OBS', merge_how='outer')`
121
+
122
+ ---
123
+
124
+ ## 14. `merge_and_update_dfs`
125
+ - **Description**: Merges two dataframes on a key column, updates the first dataframe's columns with the second's where available, and returns a dataframe sorted by the key column.
126
+ - **Usage**: `merge_and_update_dfs(df1, df2, key_column)`
127
+ - **Example**: `merge_and_update_dfs(processed_facebook, finalised_meta, 'OBS')`
128
+
129
+ ---
130
+
131
+ ## 15. `convert_us_to_uk_dates`
132
+ - **Description**: Convert a DataFrame column with mixed date formats to datetime.
133
+ - **Usage**: `convert_us_to_uk_dates(df, date_col)`
134
+ - **Example**: `convert_us_to_uk_dates(df, 'date')`
135
+
136
+ ---
137
+
138
+ ## 16. `combine_sheets`
139
+ - **Description**: Combines multiple DataFrames from a dictionary into a single DataFrame.
140
+ - **Usage**: `combine_sheets(all_sheets)`
141
+ - **Example**: `combine_sheets({'Sheet1': df1, 'Sheet2': df2})`
142
+
143
+ ---
144
+
145
+ ## 17. `pivot_table`
146
+ - **Description**: Dynamically pivots a DataFrame based on specified columns.
147
+ - **Usage**: `pivot_table(df, index_col, columns, values_col, filters_dict=None, fill_value=0, aggfunc='sum', margins=False, margins_name='Total', datetime_trans_needed=True, reverse_header_order=False, fill_missing_weekly_dates=False, week_commencing='W-MON')`
148
+ - **Example**: `pivot_table(df, 'OBS', 'Channel Short Names', 'Value', filters_dict={'Master Include': ' == 1', 'OBS': ' >= datetime(2019,9,9)', 'Metric Short Names': ' == spd'}, fill_value=0, aggfunc='sum', margins=False, margins_name='Total', datetime_trans_needed=True, reverse_header_order=True, fill_missing_weekly_dates=True, week_commencing='W-MON')`
149
+
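+ A simpler sketch of the same idea, spreading channels across the header for a single metric (assumes the `ims` alias; the string-based `filters_dict` syntax follows the documented example above):
+
+ ```python
+ import pandas as pd
+
+ # A tiny long-format extract: one row per (week, channel, metric).
+ long_df = pd.DataFrame({
+     "OBS": ["2023-01-02", "2023-01-02", "2023-01-09", "2023-01-09"],
+     "Channel Short Names": ["facebook", "tv", "facebook", "tv"],
+     "Metric Short Names": ["spd", "spd", "spd", "spd"],
+     "Value": [100, 250, 120, 240],
+ })
+
+ # One row per week-commencing date, one column per channel, spend summed.
+ wide = ims.pivot_table(
+     long_df, "OBS", "Channel Short Names", "Value",
+     filters_dict={"Metric Short Names": " == spd"},
+     aggfunc="sum",
+     fill_missing_weekly_dates=True,
+     week_commencing="W-MON",
+ )
+ ```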
150
+ ---
151
+
152
+ ## 18. `apply_lookup_table_for_columns`
153
+ - **Description**: The equivalent of XLOOKUP in Excel: maps values in the specified column(s) to new values using a dictionary of substrings.
154
+ - **Usage**: `apply_lookup_table_for_columns(df, col_names, to_find_dict, if_not_in_dict='Other', new_column_name='Mapping')`
155
+ - **Example**: `apply_lookup_table_for_columns(df, col_names, {'spend': 'spd', 'clicks': 'clk'}, if_not_in_dict='Other', new_column_name='Metrics Short')`
156
+
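+ For instance, mapping raw metric names onto short codes, with unmatched rows falling back to `'Other'` (a sketch using the assumed `ims` alias):
+
+ ```python
+ import pandas as pd
+
+ raw = pd.DataFrame({"Metric": ["media spend GBP", "link clicks", "video views"]})
+
+ mapped = ims.apply_lookup_table_for_columns(
+     raw,
+     ["Metric"],
+     {"spend": "spd", "clicks": "clk"},
+     if_not_in_dict="Other",
+     new_column_name="Metrics Short",
+ )
+ # Rows containing "spend" map to "spd", "clicks" to "clk", everything else to "Other".
+ ```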
157
+ ---
158
+
159
+ ## 19. `aggregate_daily_to_wc_wide`
160
+ - **Description**: Aggregates daily data into weekly data, grouping and summing specified columns, starting on a specified day of the week.
161
+ - **Usage**: `aggregate_daily_to_wc_wide(df, date_column, group_columns, sum_columns, wc, aggregation='sum', include_totals=False)`
162
+ - **Example**: `aggregate_daily_to_wc_wide(df, 'date', ['platform'], ['cost', 'impressions', 'clicks'], 'mon', 'average', True)`
163
+
164
+ ---
165
+
166
+ ## 20. `merge_cols_with_seperator`
167
+ - **Description**: Merges multiple columns in a DataFrame into one column, joined with a separator (default `_`). Useful for building lookup tables.
168
+ - **Usage**: `merge_cols_with_seperator(df, col_names, seperator='_', output_column_name='Merged', starting_prefix_str=None, ending_prefix_str=None)`
169
+ - **Example**: `merge_cols_with_seperator(df, ['Campaign', 'Product'], seperator='|', output_column_name='Merged Columns', starting_prefix_str='start_', ending_prefix_str='_end')`
170
+
171
+ ---
172
+
173
+ ## 21. `check_sum_of_df_cols_are_equal`
174
+ - **Description**: Checks whether the sums of two columns in two DataFrames are equal, and reports both sums and the difference between them.
175
+ - **Usage**: `check_sum_of_df_cols_are_equal(df_1, df_2, cols_1, cols_2)`
176
+ - **Example**: `check_sum_of_df_cols_are_equal(df_1, df_2, 'Media Cost', 'Spend')`
177
+
178
+ ---
179
+
180
+ ## 22. `convert_2_df_cols_to_dict`
181
+ - **Description**: Creates a dictionary using two columns in a DataFrame.
182
+ - **Usage**: `convert_2_df_cols_to_dict(df, key_col, value_col)`
183
+ - **Example**: `convert_2_df_cols_to_dict(df, 'Campaign', 'Channel')`
184
+
185
+ ---
186
+
187
+ ## 23. `create_FY_and_H_columns`
188
+ - **Description**: Creates financial year, half-year, and financial half-year columns.
189
+ - **Usage**: `create_FY_and_H_columns(df, index_col, start_date, starting_FY, short_format='No', half_years='No', combined_FY_and_H='No')`
190
+ - **Example**: `create_FY_and_H_columns(df, 'Week (M-S)', '2022-10-03', 'FY2023', short_format='Yes', half_years='Yes', combined_FY_and_H='Yes')`
191
+
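+ For a weekly index whose financial year starts on 3 October 2022, the call might look like this (a sketch with the assumed `ims` alias):
+
+ ```python
+ import pandas as pd
+
+ weeks = pd.DataFrame({
+     "Week (M-S)": pd.date_range("2022-10-03", periods=52, freq="W-MON"),
+ })
+
+ # Adds financial-year, half-year and combined FY/half-year label columns.
+ labelled = ims.create_FY_and_H_columns(
+     weeks, "Week (M-S)", "2022-10-03", "FY2023",
+     short_format="Yes", half_years="Yes", combined_FY_and_H="Yes",
+ )
+ ```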
192
+ ---
193
+
194
+ ## 24. `keyword_lookup_replacement`
195
+ - **Description**: Updates chosen values in a specified column of the DataFrame based on a lookup dictionary.
196
+ - **Usage**: `keyword_lookup_replacement(df, col, replacement_rows, cols_to_merge, replacement_lookup_dict, output_column_name='Updated Column')`
197
+ - **Example**: `keyword_lookup_replacement(df, 'channel', 'Paid Search Generic', ['channel', 'segment', 'product'], qlik_dict_for_channel, output_column_name='Channel New')`
198
+
199
+ ---
200
+
201
+ ## 25. `create_new_version_of_col_using_LUT`
202
+ - **Description**: Creates a new column in a DataFrame by mapping values from an old column using a lookup table.
203
+ - **Usage**: `create_new_version_of_col_using_LUT(df, keys_col, value_col, dict_for_specific_changes, new_col_name='New Version of Old Col')`
204
+ - **Example**: `create_new_version_of_col_using_LUT(df, 'Campaign Name', 'Campaign Type', search_campaign_name_retag_lut, 'Campaign Name New')`
205
+
206
+ ---
207
+
208
+ ## 26. `convert_df_wide_2_long`
209
+ - **Description**: Converts a DataFrame from wide to long format.
210
+ - **Usage**: `convert_df_wide_2_long(df, value_cols, variable_col_name='Stacked', value_col_name='Value')`
211
+ - **Example**: `convert_df_wide_2_long(df, ['Media Cost', 'Impressions', 'Clicks'], variable_col_name='Metric')`
212
+
213
+ ---
214
+
215
+ ## 27. `manually_edit_data`
216
+ - **Description**: Enables manual updates to DataFrame cells by applying filters and editing a column.
217
+ - **Usage**: `manually_edit_data(df, filters_dict, col_to_change, new_value, change_in_existing_df_col='No', new_col_to_change_name='New', manual_edit_col_name=None, add_notes='No', existing_note_col_name=None, note=None)`
218
+ - **Example**: `manually_edit_data(df, {'OBS': ' <= datetime(2023,1,23)', 'File_Name': ' == France media'}, 'Master Include', 1, change_in_existing_df_col='Yes', new_col_to_change_name='Master Include', manual_edit_col_name='Manual Changes')`
219
+
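+ The `filters_dict` uses the same string-condition syntax as `filter_df_on_multiple_conditions` (function 29 below): each key is a column name and each value is a condition string applied to it. A sketch, using the assumed `ims` alias:
+
+ ```python
+ import pandas as pd
+ from datetime import datetime  # the condition strings below refer to datetime
+
+ media = pd.DataFrame({
+     "OBS": pd.to_datetime(["2023-01-09", "2023-01-16", "2023-01-30"]),
+     "File_Name": ["France media", "France media", "UK media"],
+     "Master Include": [0, 0, 0],
+ })
+
+ # Set Master Include to 1 for French rows on or before 23 Jan 2023 and record
+ # the edit in a "Manual Changes" column.
+ edited = ims.manually_edit_data(
+     media,
+     {"OBS": " <= datetime(2023,1,23)", "File_Name": " == France media"},
+     "Master Include", 1,
+     change_in_existing_df_col="Yes",
+     new_col_to_change_name="Master Include",
+     manual_edit_col_name="Manual Changes",
+ )
+ ```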
220
+ ---
221
+
222
+ ## 28. `format_numbers_with_commas`
223
+ - **Description**: Formats numeric data with thousands-separator commas and a specified number of decimal places.
224
+ - **Usage**: `format_numbers_with_commas(df, decimal_length_chosen=2)`
225
+ - **Example**: `format_numbers_with_commas(df, 1)`
226
+
227
+ ---
228
+
229
+ ## 29. `filter_df_on_multiple_conditions`
230
+ - **Description**: Filters a DataFrame based on multiple conditions from a dictionary.
231
+ - **Usage**: `filter_df_on_multiple_conditions(df, filters_dict)`
232
+ - **Example**: `filter_df_on_multiple_conditions(df, {'OBS': ' <= datetime(2023,1,23)', 'File_Name': ' == France media'})`
233
+
234
+ ---
235
+
236
+ ## 30. `read_and_concatenate_files`
237
+ - **Description**: Reads and concatenates all files of a specified type in a folder.
238
+ - **Usage**: `read_and_concatenate_files(folder_path, file_type='csv')`
239
+ - **Example**: `read_and_concatenate_files(folder_path, file_type='csv')`
240
+
241
+ ---
242
+
243
+ ## 31. `remove_zero_values`
244
+ - **Description**: Removes rows with zero values in a specified column.
245
+ - **Usage**: `remove_zero_values(data_frame, column_to_filter)`
246
+ - **Example**: `remove_zero_values(df, 'Funeral_Delivery')`
247
+
248
+ ---
249
+
250
+ ## 32. `upgrade_outdated_packages`
251
+ - **Description**: Upgrades all outdated packages in the environment.
252
+ - **Usage**: `upgrade_outdated_packages()`
253
+ - **Example**: `upgrade_outdated_packages()`
254
+
255
+ ---
256
+
257
+ ## 33. `convert_mixed_formats_dates`
258
+ - **Description**: Converts a mix of US and UK date formats to datetime.
259
+ - **Usage**: `convert_mixed_formats_dates(df, date_col)`
260
+ - **Example**: `convert_mixed_formats_dates(df, 'OBS')`
261
+
262
+ ---
263
+
264
+ ## 34. `fill_weekly_date_range`
265
+ - **Description**: Fills in missing weeks with zero values.
266
+ - **Usage**: `fill_weekly_date_range(df, date_column, freq)`
267
+ - **Example**: `fill_weekly_date_range(df, 'OBS', 'W-MON')`
268
+
269
+ ---
270
+
271
+ ## 35. `add_prefix_and_suffix`
272
+ - **Description**: Adds prefixes and/or suffixes to column headers.
273
+ - **Usage**: `add_prefix_and_suffix(df, prefix='', suffix='', date_col=None)`
274
+ - **Example**: `add_prefix_and_suffix(df, prefix='media_', suffix='_spd', date_col='obs')`
275
+
276
+ ---
277
+
278
+ ## 36. `create_dummies`
279
+ - **Description**: Converts time series into binary indicators based on a threshold.
280
+ - **Usage**: `create_dummies(df, date_col=None, dummy_threshold=0, add_total_dummy_col='No', total_col_name='total')`
281
+ - **Example**: `create_dummies(df, date_col='obs', dummy_threshold=100, add_total_dummy_col='Yes', total_col_name='med_total_dum')`
282
+
283
+ ---
284
+
285
+ ## 37. `replace_substrings`
286
+ - **Description**: Replaces substrings in a string column using a dictionary of replacements, with the option to convert values to lowercase.
287
+ - **Usage**: `replace_substrings(df, column, replacements, to_lower=False, new_column=None)`
288
+ - **Example**: `replace_substrings(df, 'Influencer Handle', replacement_dict, to_lower=True, new_column='Short Version')`
289
+
290
+ ---
291
+
292
+ ## 38. `add_total_column`
293
+ - **Description**: Sums all columns (excluding a specified column) to create a total column.
294
+ - **Usage**: `add_total_column(df, exclude_col=None, total_col_name='Total')`
295
+ - **Example**: `add_total_column(df, exclude_col='obs', total_col_name='total_media_spd')`
296
+
297
+ ---
298
+
299
+ ## 39. `apply_lookup_table_based_on_substring`
300
+ - **Description**: Maps substrings in a column to values using a lookup dictionary.
301
+ - **Usage**: `apply_lookup_table_based_on_substring(df, column_name, category_dict, new_col_name='Category', other_label='Other')`
302
+ - **Example**: `apply_lookup_table_based_on_substring(df, 'Campaign Name', campaign_dict, new_col_name='Campaign Name Short', other_label='Full Funnel')`
303
+
304
+ ---
305
+
306
+ ## 40. `compare_overlap`
307
+ - **Description**: Compares matching rows and columns in two DataFrames and outputs the differences.
308
+ - **Usage**: `compare_overlap(df1, df2, date_col)`
309
+ - **Example**: `compare_overlap(df_1, df_2, 'obs')`
310
+
311
+ ---
312
+
313
+ ## 41. `week_commencing_2_week_commencing_conversion`
314
+ - **Description**: Converts a week commencing column to a different start day.
315
+ - **Usage**: `week_commencing_2_week_commencing_conversion(df, date_col, week_commencing='sun')`
316
+ - **Example**: `week_commencing_2_week_commencing_conversion(df, 'obs', week_commencing='mon')`
317
+
318
+ ---
319
+
320
+ ## 42. `plot_chart`
321
+ - **Description**: Plots various chart types including line, area, scatter, and bar.
322
+ - **Usage**: `plot_chart(df, date_col, value_cols, chart_type='line', title='Chart', x_title='Date', y_title='Values', **kwargs)`
323
+ - **Example**: `plot_chart(df, 'obs', df.columns, chart_type='line', title='Spend Over Time', x_title='Date', y_title='Spend')`
324
+
325
+ ---
326
+
327
+ ## 43. `plot_two_with_common_cols`
328
+ - **Description**: Plots charts for two DataFrames based on common column names.
329
+ - **Usage**: `plot_two_with_common_cols(df1, df2, date_column, same_axis=True)`
330
+ - **Example**: `plot_two_with_common_cols(df_1, df_2, date_column='obs')`
331
+
332
+ ---
333
+
334
+ ## Data Pulling
335
+
336
+ ## 1. `pull_fred_data`
337
+ - **Description**: Fetch data from FRED using series ID tokens.
338
+ - **Usage**: `pull_fred_data(week_commencing, series_id_list)`
339
+ - **Example**: `pull_fred_data('mon', ['GPDIC1', 'Y057RX1Q020SBEA', 'GCEC1', 'ND000333Q', 'Y006RX1Q020SBEA'])`
340
+
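+ A sketch of pulling a few national-accounts series into a Monday-commencing weekly frame. The import path is inferred from the package layout (`imsciences/datapull.py`), and the underlying `fredapi` client normally also needs a FRED API key configured, which the documented signature does not show:
+
+ ```python
+ # Assumed import path for the data-pulling helpers -- adjust to your installation.
+ from imsciences import datapull as ims_pull
+
+ fred_df = ims_pull.pull_fred_data(
+     "mon",
+     ["GPDIC1", "Y057RX1Q020SBEA", "GCEC1"],
+ )
+ ```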
341
+ ---
342
+
343
+ ## 2. `pull_boe_data`
344
+ - **Description**: Fetch and process Bank of England interest rate data.
345
+ - **Usage**: `pull_boe_data(week_commencing)`
346
+ - **Example**: `pull_boe_data('mon')`
347
+
348
+ ---
349
+
350
+ ## 3. `pull_ons_data`
351
+ - **Description**: Fetch and process time series data from the ONS API.
352
+ - **Usage**: `pull_ons_data(series_list, week_commencing)`
353
+ - **Example**: `pull_ons_data([{'series_id': 'LMSBSA', 'dataset_id': 'LMS'}], 'mon')`
354
+
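+ Each ONS series is identified by a series ID plus the dataset it belongs to, passed as a list of dictionaries (a sketch, reusing the assumed `ims_pull` alias from the FRED example):
+
+ ```python
+ from imsciences import datapull as ims_pull  # assumed import path
+
+ ons_df = ims_pull.pull_ons_data(
+     [{"series_id": "LMSBSA", "dataset_id": "LMS"}],  # series/dataset pair from the documented example
+     "mon",                                           # aggregate to Monday-commencing weeks
+ )
+ ```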
355
+ ---
356
+
357
+ ## 4. `pull_oecd`
358
+ - **Description**: Fetch macroeconomic data from OECD for a specified country.
359
+ - **Usage**: `pull_oecd(country='GBR', week_commencing='mon', start_date='1950-01-01')`
360
+ - **Example**: `pull_oecd('GBR', 'mon', '1950-01-01')`
361
+
362
+ ---
363
+
364
+ ## 5. `get_google_mobility_data`
365
+ - **Description**: Fetch Google Mobility data for the specified country.
366
+ - **Usage**: `get_google_mobility_data(country, wc)`
367
+ - **Example**: `get_google_mobility_data('United Kingdom', 'mon')`
368
+
369
+ ---
370
+
371
+ ## 6. `pull_combined_dummies`
372
+ - **Description**: Generate combined dummy variables for seasonality, trends, and COVID lockdowns.
373
+ - **Usage**: `pull_combined_dummies(week_commencing)`
374
+ - **Example**: `pull_combined_dummies('mon')`
375
+
376
+ ---
377
+
378
+ ## 7. `pull_weather`
379
+ - **Description**: Fetch and process historical weather data for the specified country.
380
+ - **Usage**: `pull_weather(week_commencing, country)`
381
+ - **Example**: `pull_weather('mon', 'GBR')`
382
+
383
+ ---
@@ -1,16 +1,17 @@
1
1
  dataprocessing/__init__.py,sha256=quSwsLs6IuLoA5Rzi0ZD40xZaQudwDteF7_ai9JfTPk,32
2
2
  dataprocessing/data-processing-functions.py,sha256=vE1vsZ8xOSbR9Bwlp9SWXwEHXQ0nFydwGkvzHXf2f1Y,41
3
3
  dataprocessing/datafunctions.py,sha256=vE1vsZ8xOSbR9Bwlp9SWXwEHXQ0nFydwGkvzHXf2f1Y,41
4
- imsciences/__init__.py,sha256=GIPbLmWc06sVcOySWwNvMNUr6XGOHqPLryFIWgtpHh8,78
4
+ imsciences/__init__.py,sha256=0IwH7R_2N8vimJJo2DLzIG1hq9ddn8gB6ijlLrQemZs,122
5
5
  imsciences/datafunctions-IMS-24Ltp-3.py,sha256=3Snv-0iE_03StmyjtT-riOU9f4v8TaJWLoyZLJp6l8Y,141406
6
- imsciences/datafunctions.py,sha256=gf_RuaQ64ygV9atcn_MGJjJAjTt5PgBQi1B-GFhmNYc,153114
6
+ imsciences/datafunctions.py,sha256=lvvodU8dZ9IN_GS7FYMuft9ZsQkD2BMIGQxLiN8GY7c,151557
7
7
  imsciences/datapull.py,sha256=TPY0LDgOkcKTBk8OekbD0Grg5x0SomAK2dZ7MuT6X1E,19000
8
+ imsciences/unittesting.py,sha256=d9H5HN8y7oof59hqN9mGqkjulExqFd93BEW-X8w_Id8,58142
8
9
  imsciencesdataprocessing/__init__.py,sha256=quSwsLs6IuLoA5Rzi0ZD40xZaQudwDteF7_ai9JfTPk,32
9
10
  imsciencesdataprocessing/datafunctions.py,sha256=vE1vsZ8xOSbR9Bwlp9SWXwEHXQ0nFydwGkvzHXf2f1Y,41
10
11
  imsdataprocessing/__init__.py,sha256=quSwsLs6IuLoA5Rzi0ZD40xZaQudwDteF7_ai9JfTPk,32
11
12
  imsdataprocessing/datafunctions.py,sha256=vE1vsZ8xOSbR9Bwlp9SWXwEHXQ0nFydwGkvzHXf2f1Y,41
12
- imsciences-0.6.3.0.dist-info/METADATA,sha256=weHmVNBR3_TL3KaYNEAmMCFl9xbpcIgdPl54FNv-s28,854
13
- imsciences-0.6.3.0.dist-info/PKG-INFO-IMS-24Ltp-3,sha256=yqZbigwHjnYoqyI81PGz_AeofRFfOrwH_Vyawyef-mg,854
14
- imsciences-0.6.3.0.dist-info/WHEEL,sha256=GJ7t_kWBFywbagK5eo9IoUwLW6oyOeTKmQ-9iHFVNxQ,92
15
- imsciences-0.6.3.0.dist-info/top_level.txt,sha256=hsENS-AlDVRh8tQJ6-426iUQlla9bPcGc0-UlFF0_iU,11
16
- imsciences-0.6.3.0.dist-info/RECORD,,
13
+ imsciences-0.6.3.2.dist-info/METADATA,sha256=k22-OJm6rdvDU7mubqDGW1K9Z-inek4VCQ4HdAw51cA,16981
14
+ imsciences-0.6.3.2.dist-info/PKG-INFO-IMS-24Ltp-3,sha256=yqZbigwHjnYoqyI81PGz_AeofRFfOrwH_Vyawyef-mg,854
15
+ imsciences-0.6.3.2.dist-info/WHEEL,sha256=ixB2d4u7mugx_bCBycvM9OzZ5yD7NmPXFRtKlORZS2Y,91
16
+ imsciences-0.6.3.2.dist-info/top_level.txt,sha256=hsENS-AlDVRh8tQJ6-426iUQlla9bPcGc0-UlFF0_iU,11
17
+ imsciences-0.6.3.2.dist-info/RECORD,,
@@ -1,5 +1,5 @@
1
1
  Wheel-Version: 1.0
2
- Generator: bdist_wheel (0.43.0)
2
+ Generator: setuptools (74.1.0)
3
3
  Root-Is-Purelib: true
4
4
  Tag: py3-none-any
5
5
 
@@ -1,24 +0,0 @@
1
- Metadata-Version: 2.1
2
- Name: imsciences
3
- Version: 0.6.3.0
4
- Summary: IMS Data Processing Package
5
- Author: IMS
6
- Author-email: cam@im-sciences.com
7
- Keywords: python,data processing
8
- Classifier: Development Status :: 3 - Alpha
9
- Classifier: Intended Audience :: Developers
10
- Classifier: Programming Language :: Python :: 3
11
- Classifier: Operating System :: Unix
12
- Classifier: Operating System :: MacOS :: MacOS X
13
- Classifier: Operating System :: Microsoft :: Windows
14
- Description-Content-Type: text/markdown
15
- Requires-Dist: pandas
16
-
17
- # IMS Package Documentation
18
-
19
- The IMS package is a python library for processing incoming data into a format that can be used for projects. IMS processing offers a variety of functions to manipulate and analyze data efficiently. Here are the functionalities provided by the package:
20
-
21
- ## Data Processing
22
-
23
- ## Data Pulling
24
-