python-ubercode-utils 1.0.10__tar.gz → 2.0.3__tar.gz
This diff shows the content of publicly available package versions as released to one of the supported registries. It is provided for informational purposes only and reflects the changes between the two versions as they appear in their public registry.
- python_ubercode_utils-2.0.3/PKG-INFO +29 -0
- python_ubercode_utils-2.0.3/python_ubercode_utils.egg-info/PKG-INFO +29 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/setup.py +2 -2
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/test/test_data.py +16 -18
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/convert.py +28 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/data.py +17 -26
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/dataframe.py +19 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/environment.py +59 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/logging.py +1 -1
- python-ubercode-utils-1.0.10/PKG-INFO +0 -29
- python-ubercode-utils-1.0.10/python_ubercode_utils.egg-info/PKG-INFO +0 -29
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/LICENSE +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/MANIFEST.in +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/README.md +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/python_ubercode_utils.egg-info/SOURCES.txt +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/python_ubercode_utils.egg-info/dependency_links.txt +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/python_ubercode_utils.egg-info/not-zip-safe +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/python_ubercode_utils.egg-info/top_level.txt +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/setup.cfg +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/test/test_convert.py +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/test/test_cursor.py +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/test/test_dataframe.py +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/test/test_environment.py +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/test/test_logging.py +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/test/test_urls.py +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/__init__.py +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/__init__.py +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/cursor.py +0 -0
- {python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/urls.py +0 -0

python_ubercode_utils-2.0.3/PKG-INFO
@@ -0,0 +1,29 @@
+Metadata-Version: 2.1
+Name: python_ubercode_utils
+Version: 2.0.3
+Summary: Core python utilities for all apps
+Home-page: https://github.com/sstacha/python-ubercode-utils
+Author: Steve Stacha
+Author-email: sstacha@gmail.com
+License: MIT
+Classifier: Development Status :: 3 - Alpha
+Classifier: Programming Language :: Python :: 3
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Operating System :: OS Independent
+Classifier: Topic :: Utilities
+Requires-Python: >=3.8
+Description-Content-Type: text/markdown
+License-File: LICENSE
+
+# python-ubercode-utils
+Extracting common python utilities re-used between all projects. The intent is to have minimal dependencies
+so the library can be used by django settings without circular references. I also have color logging class for
+jupyter notebooks. I will have a couple of libraries that will extend this functionality. Scan the test cases in the
+tests folder for common use cases.
+
+python-utils-core:
+- basic conversion helper utilities
+- color logging without dependencies
+- manipulating urls and their parameters
+- helper classes to make working with xml and json data easier
+- minimal helper classes to convert database cursor results to dictionaries or tuples

python_ubercode_utils-2.0.3/python_ubercode_utils.egg-info/PKG-INFO
@@ -0,0 +1,29 @@
+Metadata-Version: 2.1
+Name: python-ubercode-utils
+Version: 2.0.3
+Summary: Core python utilities for all apps
+Home-page: https://github.com/sstacha/python-ubercode-utils
+Author: Steve Stacha
+Author-email: sstacha@gmail.com
+License: MIT
+Classifier: Development Status :: 3 - Alpha
+Classifier: Programming Language :: Python :: 3
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Operating System :: OS Independent
+Classifier: Topic :: Utilities
+Requires-Python: >=3.8
+Description-Content-Type: text/markdown
+License-File: LICENSE
+
+# python-ubercode-utils
+Extracting common python utilities re-used between all projects. The intent is to have minimal dependencies
+so the library can be used by django settings without circular references. I also have color logging class for
+jupyter notebooks. I will have a couple of libraries that will extend this functionality. Scan the test cases in the
+tests folder for common use cases.
+
+python-utils-core:
+- basic conversion helper utilities
+- color logging without dependencies
+- manipulating urls and their parameters
+- helper classes to make working with xml and json data easier
+- minimal helper classes to convert database cursor results to dictionaries or tuples

{python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/setup.py
@@ -3,8 +3,8 @@ import setuptools
 with open("README.md", "r") as fh:
     long_description = fh.read()
 
-setuptools.setup(name='python-ubercode-utils',
-                 version='1.0.10',
+setuptools.setup(name='python_ubercode_utils',
+                 version='2.0.3',
                  description='Core python utilities for all apps',
                  long_description=long_description,
                  long_description_content_type="text/markdown",

{python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/test/test_data.py
@@ -1,11 +1,11 @@
 import unittest
 from pathlib import Path
 
-from ubercode.utils.data import JSON
-from ubercode.utils.data import XML
+from ubercode.utils.data import JsonData
+from ubercode.utils.data import XmlData
 
 
-class TestJSON(unittest.TestCase):
+class TestJsonData(unittest.TestCase):
 
     # -------- common usages ----------
     def test_JSON(self):
@@ -48,13 +48,11 @@ class TestJSON(unittest.TestCase):
         }
         """
         # test we can construct from a json string
-        json =
-        self.assertEqual(len(json.
+        json = JsonData(json_string=json_string)
+        self.assertEqual(len(json.data['people']), 3)
         # test we can construct by chaining and reading file
-        json2 =
-        self.assertEqual(json.
-        # test the dict matches the to_dict() result
-        self.assertEqual(json.json_dict, json.to_dict())
+        json2 = JsonData().from_json_file(str(file_path))
+        self.assertEqual(json.data, json2.data)
         # test encoding
         json_string = """
         {
@@ -91,19 +89,19 @@ class TestJSON(unittest.TestCase):
         ]
         }
         """
-        json =
-        self.assertEqual(len(json.
-        first_name = json.
+        json = JsonData(json_string=json_string, encode_ampersands=True)
+        self.assertEqual(len(json.data['people']), 3)
+        first_name = json.data['people'][0]['firstName']
         self.assertEqual(first_name, "Joe & Baker")
         # make sure the second name isn't double encoded
-        second_name = json.
+        second_name = json.data['people'][1]['firstName']
         self.assertEqual(second_name, "James &")
         # test the str function
         result = "{'people': [{'firstName': 'Joe & Baker', 'lastName': 'Jackson', 'gender': 'male', 'age': 28, 'number': '7349282382', 'groups': ['members', 'student']}, {'firstName': 'James &', 'lastName': 'Smith', 'gender': 'male', 'age': 32, 'number': '5678568567', 'groups': ['members', 'professional']}, {'firstName': 'Emily', 'lastName': 'Jones', 'gender': 'female', 'age': 24, 'number': '456754675'}]}"
         self.assertEqual(str(json), result)
 
 
-class TestXML(unittest.TestCase):
+class TestXmlData(unittest.TestCase):
 
     # -------- common usages ----------
     def test_XML(self):
@@ -123,15 +121,15 @@ class TestXML(unittest.TestCase):
         </contacts>
         """
         # test we can construct from an xml string
-        xml =
+        xml = XmlData(xml_string=xml_string)
         # NOTE: because we used a multiline string we need to strip the extra newlines before and after <contacts>
         self.assertEqual(str(xml), xml_string.strip())
         # normal string doesn't need stripping
         xml_compact_string = "<contacts><contact><name>Buggs Bunny</name></contact><contact><name>Daffy Duck</name></contact></contacts>"
-        xml2 =
+        xml2 = XmlData(xml_compact_string)
         self.assertEqual(str(xml2), xml_compact_string)
         # test we can create using the from_xml_string() method chaining
-        xml3 =
+        xml3 = XmlData().from_xml_string(xml_compact_string)
         self.assertEqual(str(xml2), str(xml3))
         # test that method chaining after constructor overrides the value in place
         self.assertNotEqual(str(xml), str(xml2))
@@ -155,7 +153,7 @@ class TestXML(unittest.TestCase):
         </contact>
         </contacts>
         """
-        xml =
+        xml = XmlData(xml_string=xml_string, encode_ampersands=True)
         xml_dict = xml.to_dict()
         self.assertEqual(xml_dict['contacts']['contact'][0]['@attr'], '1')
 

{python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/convert.py
@@ -250,3 +250,31 @@ def to_mask(value: str or None) -> str or None:
         _mask += value[-_iqtr:]
     return _mask
 
+def obj_to_str(obj, property_filter_list=None):
+    """
+    Mostly used for debugging. Very useful to print the properties of an object on a line; condensing reasonably
+
+    :param obj: the object to inspect properties for
+    :param property_filter_list: any property names we want to omit
+    :return: a string containing the outputted properties
+    """
+    attbuf = ""
+    for key, value in vars(obj).items():
+        if property_filter_list and key in property_filter_list:
+            continue
+        if not key.startswith('__'):
+            if len(attbuf) > 0:
+                attbuf += ", "
+            # show the first 50 chars and last 25 chars
+            this_content = str(value)
+            this_content = this_content.replace('\n', ' ').replace('\r', '').strip()
+            if this_content:
+                if len(this_content) > 150:
+                    attbuf += str(key) + ": [" + this_content[0:25] + " ... " + this_content[
+                        len(this_content) - 25:len(
+                        this_content)] + "]"
+                else:
+                    attbuf += str(key) + ": " + this_content or ""
+            else:
+                attbuf += str(key) + ": " + this_content or ""
+    return "[" + attbuf + "]"
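
For orientation, the obj_to_str helper added above prints an object's instance attributes on a single line, skipping any names passed in property_filter_list and shortening very long values. A minimal usage sketch (the Job class and its values are made up for illustration, not taken from the package):

    from ubercode.utils import convert

    class Job:
        def __init__(self):
            self.name = "nightly-sync"
            self.retries = 3
            self._token = "secret"

    # filtered names are skipped; everything else renders as "key: value"
    print(convert.obj_to_str(Job(), property_filter_list=["_token"]))
    # [name: nightly-sync, retries: 3]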

{python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/data.py
@@ -9,12 +9,11 @@ import xml.etree.ElementTree as Etree
 from collections import defaultdict
 
 
-class JSON:
+class JsonData:
     """ simple json class to encapsulate basic json operations """
-    # the base implementation will be dict
-    json_dict = {}
-
     def __init__(self, json_string: str or None = None, encode_ampersands: bool = False):
+        # data is core python objects (list, dict, object, etc) from the core python JSON.loads
+        self.data = None
         self.encode_ampersands = encode_ampersands
         self.from_json_string(json_string)
 
@@ -28,7 +27,7 @@ class JSON:
         if self.encode_ampersands:
             regex = re.compile(r"&(?!amp;|lt;|gt;)")
             json_string = regex.sub("&", json_string)
-        self.
+        self.data = json.loads(json_string)
         return self
 
     def from_json_file(self, json_file_path: str):
@@ -43,26 +42,18 @@ class JSON:
         if self.encode_ampersands:
             regex = re.compile(r"&(?!amp;|lt;|gt;)")
             json_string = regex.sub("&", json_string)
-        self.
+        self.data = json.loads(json_string)
        return self
 
-    def to_dict(self) -> dict:
-        """
-        output to dict
-        :return: dict
-        """
-        return self.json_dict
-
     def __str__(self):
-        return str(self.
-
+        return str(self.data)
 
-class XML:
-    """ simple xml class to encapsulate basic xml operations """
-    # the base implementation will be Etree from the base python lib
-    xml_tree = None
 
+class XmlData:
+    """ simple xml class to encapsulate basic xml operations using build in python ETree """
     def __init__(self, xml_string: str or None = None, encode_ampersands: bool = False):
+        # data is core python ElementTree object
+        self.data = None
         self.encode_ampersands = encode_ampersands
         self.from_xml_string(xml_string)
 
@@ -76,7 +67,7 @@ class XML:
         if self.encode_ampersands:
             regex = re.compile(r"&(?!amp;|lt;|gt;)")
             xml_string = regex.sub("&", xml_string)
-        self.
+        self.data = Etree.fromstring(xml_string)
         return self
 
     def from_xml_file(self, xml_file_path: str):
@@ -96,7 +87,7 @@ class XML:
         else:
             tree = Etree.parse(xml_file_path)
             tree = tree.getroot()
-        self.
+        self.data = tree
         return self
 
     def to_dict(self) -> dict:
@@ -104,10 +95,10 @@ class XML:
         output to dict
         :return: dict
         """
-        return
+        return XmlData.tree_to_dict(self.data)
 
     @staticmethod
-    def tree_to_dict(t) -> dict:
+    def tree_to_dict(t: Etree) -> dict:
         """
         Convert an etree structure to a dictionary of values
         :param t: etree instance
@@ -117,7 +108,7 @@ class XML:
         children = list(t)
         if children:
             dd = defaultdict(list)
-            for dc in map(
+            for dc in map(XmlData.tree_to_dict, children):
                 for k, v in dc.items():
                     dd[k].append(v)
             d = {t.tag: {k: v[0] if len(v) == 1 else v for k, v in dd.items()}}
@@ -133,6 +124,6 @@ class XML:
         return d
 
     def __str__(self):
-        if self.
-            return Etree.tostring(self.
+        if self.data:
+            return Etree.tostring(self.data, encoding='unicode')
         return ""
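
Net effect of the data.py changes above: the JSON and XML wrappers become JsonData and XmlData, the parsed result moves to a single data attribute, and JsonData.to_dict() is removed. A minimal sketch of the 2.0.3 usage implied by the diff and the updated tests (the sample strings are illustrative):

    from ubercode.utils.data import JsonData, XmlData

    # parsed JSON is reached through .data instead of json_dict / to_dict()
    people = JsonData(json_string='{"people": [{"firstName": "Emily"}]}')
    print(people.data['people'][0]['firstName'])   # Emily

    # XmlData keeps the parsed ElementTree root on .data and still round-trips via str()
    contacts = XmlData('<contacts><contact><name>Daffy Duck</name></contact></contacts>')
    print(str(contacts))   # prints the same markup back, as the tests assert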

{python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/dataframe.py
@@ -1,7 +1,26 @@
 """ common utilities for working with dataframes"""
 from typing import Any
 from . import logging
+from datetime import datetime
 
+default_date_formats = {
+    'date': '%Y-%m-%d',
+    'datetime': '%Y-%m-%d %H:%M:%S',
+    'datetimemilli': '%Y-%m-%d %H:%M:%S.%f'
+}
+
+def to_date_str(date_string: str or None, date_col: str, date_field_map: dict, date_formats: dict) -> str or None:
+    if date_string == 'None' or date_string == 'NaT' or not date_string:
+        return None
+    if '.' in date_string:
+        dt = datetime.strptime(date_string, date_formats['datetimemilli'])
+    elif ':' in date_string:
+        dt = datetime.strptime(date_string, date_formats['datetime'])
+    else:
+        dt = datetime.strptime(date_string, date_formats['date'])
+    if not dt:
+        return None
+    return dt.strftime(date_formats[date_field_map[date_col]])
 
 # extend the logging to include log.dataframe()
 # NOTE: making dataframe type Any, so we don't have to include pandas but intended use is dataframe
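
A short sketch of how the new to_date_str helper might be called; the column name and field map below are illustrative, not part of the package:

    from ubercode.utils.dataframe import to_date_str, default_date_formats

    # hypothetical mapping: render the 'created' column as a plain date
    date_field_map = {'created': 'date'}
    print(to_date_str('2024-03-01 08:15:00', 'created', date_field_map, default_date_formats))
    # 2024-03-01  (parsed with the 'datetime' format, re-rendered with the 'date' format)
    print(to_date_str('NaT', 'created', date_field_map, default_date_formats))
    # None  (pandas' NaT marker and empty values are treated as missing)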

{python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/environment.py
@@ -6,6 +6,7 @@ import os
 import time
 from datetime import datetime
 from typing import Any, Tuple
+from pathlib import Path
 from ubercode.utils.logging import ColorLogger
 from ubercode.utils import convert
 
@@ -223,3 +224,61 @@ class Environment:
             self._logger.warn(
                 f"{db_parts[0]}[{db_parts[1]}][{db_parts[2]}] has a database or property naming issue!")
         return db_dict
+
+class FauxApp:
+    def __init__(self, logger: ColorLogger = None, notebook_path: Path = Path(), default_dict: dict = None) -> None:
+        self._logger = logger if logger else _utils_settings_logger
+        self.notebook_path = notebook_path.resolve()
+        self.app_path = os.path.dirname(self.notebook_path)
+        self.project_path = os.path.dirname(self.app_path)
+        self.instance_path = os.path.join(self.project_path, 'instance')
+        self.config = default_dict or dict(
+            SECRET_KEY = 'localmachine',
+            LOG_LEVEL = 'DEBUG',
+            DEBUG = True,
+            APP_DIR = self.app_path,
+            PROJECT_DIR = self.project_path,
+            DATABASE_DEBUG = False,
+            SA_URL_APP = f'sqlite+pysqlite:///{os.path.join(self.instance_path, "nbsync.sqlite3")}',
+            SA_URL_SRC_LOCAL = f'sqlite+pysqlite:///{os.path.join(self.instance_path, "src.sqlite3")}',
+            SA_URL_DST_LOCAL = f'sqlite+pysqlite:///{os.path.join(self.instance_path, "dst.sqlite3")}',
+        )
+
+    def from_mapping(self, mapping: dict) -> None:
+        self.config = self.config | mapping
+
+    def from_pyfile(self, config_file: str = '~/conf/nbsync.cfg') -> None:
+        # read the config file into dict if exists then merge
+        abs_cfg = os.path.expanduser(config_file)
+        try:
+            with open(abs_cfg, 'r') as fp:
+                for line in fp:
+                    line = line.strip()
+                    if line.startswith('#') or not line:
+                        continue
+                    # Split only on the first '=' to allow '=' in the value
+                    try:
+                        key, val = line.split('=', 1)
+                        self.config[key.strip().strip("'").strip('"')] = val.strip().strip("'").strip('"')
+                    except ValueError:
+                        # Handle lines that might not have an '='
+                        continue
+        except FileNotFoundError:
+            self._logger.debug(f'[{config_file}] does not exist')
+
+    def from_prefixed_env(self, prefix: str = 'UC'):
+        # read environment variables with the given prefix and merge into config
+        prefix_len = len(prefix) + 1  # +1 for the underscore
+        for key, value in os.environ.items():
+            if key.startswith(f'{prefix}_'):
+                config_key = key[prefix_len:]  # remove the prefix and underscore
+                self.config[config_key] = value
+                # lastly, attempt to convert 'true'/'false' to boolean
+                if value.lower() == 'true':
+                    self.config[config_key] = True
+                elif value.lower() == 'false':
+                    self.config[config_key] = False
+
+    def __repr__(self):
+        return convert.obj_to_str(self)
+
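
A rough sketch of how FauxApp might be driven from a notebook, assuming the API added above; the notebook path, config file, and environment variable are illustrative:

    import os
    from pathlib import Path
    from ubercode.utils.environment import FauxApp

    # Flask-style config holder rooted at the notebook's directory
    app = FauxApp(notebook_path=Path('notebooks/sync.ipynb'))
    app.from_pyfile('~/conf/nbsync.cfg')   # merges KEY = 'value' lines if the file exists
    os.environ['UC_DEBUG'] = 'false'
    app.from_prefixed_env()                # UC_DEBUG=false lands in config['DEBUG'] as False
    print(app.config['DEBUG'], app.config['SA_URL_APP'])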

{python-ubercode-utils-1.0.10 → python_ubercode_utils-2.0.3}/ubercode/utils/logging.py
@@ -183,7 +183,7 @@ class ColorLogger:
         c_msg = str(msg)
         if self.color_output and color:
             c_msg = color + c_msg + TermColor.ENDC
-        if msg == self.repeat_msg:
+        if str(msg) == self.repeat_msg:
             # the first time we start repeating track the indent level
             if not self.repeat_cnt:
                 if indent is not None:

python-ubercode-utils-1.0.10/PKG-INFO
@@ -1,29 +0,0 @@
-Metadata-Version: 2.1
-Name: python-ubercode-utils
-Version: 1.0.10
-Summary: Core python utilities for all apps
-Home-page: https://github.com/sstacha/python-ubercode-utils
-Author: Steve Stacha
-Author-email: sstacha@gmail.com
-License: MIT
-Description: # python-ubercode-utils
-        Extracting common python utilities re-used between all projects. The intent is to have minimal dependencies
-        so the library can be used by django settings without circular references. I also have color logging class for
-        jupyter notebooks. I will have a couple of libraries that will extend this functionality. Scan the test cases in the
-        tests folder for common use cases.
-
-        python-utils-core:
-        - basic conversion helper utilities
-        - color logging without dependencies
-        - manipulating urls and their parameters
-        - helper classes to make working with xml and json data easier
-        - minimal helper classes to convert database cursor results to dictionaries or tuples
-
-Platform: UNKNOWN
-Classifier: Development Status :: 3 - Alpha
-Classifier: Programming Language :: Python :: 3
-Classifier: License :: OSI Approved :: MIT License
-Classifier: Operating System :: OS Independent
-Classifier: Topic :: Utilities
-Requires-Python: >=3.8
-Description-Content-Type: text/markdown

python-ubercode-utils-1.0.10/python_ubercode_utils.egg-info/PKG-INFO
@@ -1,29 +0,0 @@
-Metadata-Version: 2.1
-Name: python-ubercode-utils
-Version: 1.0.10
-Summary: Core python utilities for all apps
-Home-page: https://github.com/sstacha/python-ubercode-utils
-Author: Steve Stacha
-Author-email: sstacha@gmail.com
-License: MIT
-Description: # python-ubercode-utils
-        Extracting common python utilities re-used between all projects. The intent is to have minimal dependencies
-        so the library can be used by django settings without circular references. I also have color logging class for
-        jupyter notebooks. I will have a couple of libraries that will extend this functionality. Scan the test cases in the
-        tests folder for common use cases.
-
-        python-utils-core:
-        - basic conversion helper utilities
-        - color logging without dependencies
-        - manipulating urls and their parameters
-        - helper classes to make working with xml and json data easier
-        - minimal helper classes to convert database cursor results to dictionaries or tuples
-
-Platform: UNKNOWN
-Classifier: Development Status :: 3 - Alpha
-Classifier: Programming Language :: Python :: 3
-Classifier: License :: OSI Approved :: MIT License
-Classifier: Operating System :: OS Independent
-Classifier: Topic :: Utilities
-Requires-Python: >=3.8
-Description-Content-Type: text/markdown

The remaining files listed above with +0 -0 (LICENSE, MANIFEST.in, README.md, the egg-info metadata files, setup.cfg, the other test modules, and the ubercode package modules) are unchanged between 1.0.10 and 2.0.3.