dstklib 1.0.0__py3-none-any.whl

@@ -0,0 +1,360 @@
+ Metadata-Version: 2.1
+ Name: dstklib
+ Version: 1.0.0
+ Requires-Python: ==3.11
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+
+ # Distributional Semantics Toolkit
+
+ This library is based on the book *Distributional Semantics* by Alessandro Lenci and Magnus Sahlgren. It implements some of the algorithms described in the book that are commonly used in distributional semantics.
+
+ ## Table of Contents
+
+ 1. [Introduction](#introduction)
+ 2. [Installation](#installation)
+ 3. [Usage](#usage)
+ 4. [Algorithms](#algorithms)
+ 5. [Contributing](#contributing)
+ 6. [License](#license)
+
+ ## Introduction
+
+ The toolkit provides a set of classes and methods for conducting research in distributional semantics. Its methods are grouped by the common tasks followed in distributional semantics. Each task has its own substages, which should be followed in order. For the list of tasks, see [Algorithms](#algorithms). For more information about the different tasks, consult the book *Distributional Semantics* by Alessandro Lenci and Magnus Sahlgren.
+
+ ## Installation
+
+ To install it, just run:
+
+ ```bash
+ pip install dstklib
+ ```
+
+ DSTK requires Python 3.11 to work.
+
+ ## Usage
+
+ The library can be used in three modes:
+
+ ### Standalone mode
+
+ In standalone mode you can use the methods individually. Just instantiate the class that contains the method you want (without passing any arguments) and call the method:
+
+ ```python
+ from dstk import TextProcessor
+
+ tokens = ["The", "Quick", "Brown", "Fox", "Jumps", "Over", "The", "Lazy", "Dog"]
+
+ # Do not pass any arguments to the class: doing so activates workflow mode.
+ # Also, arguments must be keyword arguments; positional arguments are not supported for methods.
+ lower_tokens = TextProcessor().to_lower(tokens=tokens)
+
+ print(lower_tokens)
+
+ # Output: ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]
+ ```
+
+ ### Workflow mode
+
+ In workflow mode you can chain the desired methods (as long as they follow the order of the stages in which they can be used) and then access `result`. To use workflow mode just do:
+
+ ```python
+ from dstk import TextProcessor
+
+ text = "The quick brown fox jumps over the lazy dog while the sun sets behind the hills."
+ model = "my_spacy_model"
+
+ # Accessing result is important; otherwise the chain returns an instance of the class.
+ tokens = TextProcessor(text=text).set_model(model=model).get_tokens().remove_stop_words().get_text().result
+
+ print(tokens)
+
+ # Output: ["quick", "brown", "fox", "jumps", "lazy", "dog", "sun", "sets", "behind", "hills"]
+ ```
+
+ #### Automate workflows
+
+ If there is a specific workflow you use multiple times, you can automate it with WorkflowBuilder. Just pass the names of the methods you use (in the correct order) and their corresponding arguments as a dictionary, along with the class you are using:
+
+ ```python
+ from dstk import TextProcessor, WorkflowBuilder
+
+ text = "The quick brown fox jumps over the lazy dog while the sun sets behind the hills."
+ model = "my_spacy_model"
+
+ CustomTextWorkflow = WorkflowBuilder(
+     work_class=TextProcessor,
+     method_representation={
+         "set_model": {"model": model},
+         "get_tokens": {},
+         "remove_stop_words": {},
+         "get_text": {}
+     }
+ )
+
+ # Pass as an argument the input required by the class
+ tokens = CustomTextWorkflow(text=text)
+
+ print(tokens)
+
+ # Output: ["quick", "brown", "fox", "jumps", "lazy", "dog", "sun", "sets", "behind", "hills"]
+ ```
+
+ ### Pipeline mode
+
+ A pipeline is just a set of workflows running one after another. If there are a lot of workflows that you constantly use, you can automate the process with PipelineBuilder. Just pass your workflows as a list:
+
+ ```python
+ from dstk import TextProcessor, Collocations, WorkflowBuilder, PipelineBuilder
+
+ text = "The quick brown fox jumps over the lazy dog while the sun sets behind the hills."
+ model = "my_spacy_model"
+
+ CustomTextWorkflow = WorkflowBuilder(
+     work_class=TextProcessor,
+     method_representation={
+         "set_model": {"model": model},
+         "get_tokens": {},
+         "remove_stop_words": {},
+         "get_text": {}
+     }
+ )
+
+ CustomCollocationsWorkflow = WorkflowBuilder(
+     work_class=Collocations,
+     method_representation={
+         "extract_ngrams": {"target_word": "fox", "window_size": [2, 2]},
+         "count_collocates": {}
+     }
+ )
+
+ CustomPipeline = PipelineBuilder(
+     workflows=[
+         CustomTextWorkflow,
+         CustomCollocationsWorkflow
+     ]
+ )
+
+ # Pass as an argument the input required by the class in the first workflow. In this example, the first class is TextProcessor.
+ result = CustomPipeline(text=text)
+
+ # Output: Counter({'quick': 1, 'brown': 1, 'jumps': 1, 'over': 1})
+ ```
+
+ #### Hooks
+
+ You can add hooks (functions with custom logic) to a pipeline. You only need to follow two rules:
+
+ 1. A hook must accept exactly one input and return exactly one output.
+ 2. The type of its input must match the type returned by the previous workflow, and the type it returns must match the input of the next workflow.
+
+ Following these rules, you can insert your custom hooks this way:
+
+ ```python
+ from dstk import TextProcessor, Collocations, WorkflowBuilder, PipelineBuilder
+
+ text = "The quick brown fox jumps over the lazy dog while the sun sets behind the hills."
+ model = "my_spacy_model"
+
+ CustomTextWorkflow = WorkflowBuilder(
+     work_class=TextProcessor,
+     method_representation={
+         "set_model": {"model": model},
+         "get_tokens": {},
+         "remove_stop_words": {},
+         "get_text": {}
+     }
+ )
+
+ CustomCollocationsWorkflow = WorkflowBuilder(
+     work_class=Collocations,
+     method_representation={
+         "extract_ngrams": {"target_word": "fox_hook", "window_size": [2, 2]},
+         "count_collocates": {}
+     }
+ )
+
+ def custom_hook(tokens):
+     return [token + "_hook" for token in tokens]
+
+ CustomPipeline = PipelineBuilder(
+     workflows=[
+         CustomTextWorkflow,
+         custom_hook,
+         CustomCollocationsWorkflow
+     ]
+ )
+
+ # Pass as an argument the input required by the class in the first workflow. In this example, the first class is TextProcessor.
+ result = CustomPipeline(text=text)
+
+ # Output: Counter({'quick_hook': 1, 'brown_hook': 1, 'jumps_hook': 1, 'over_hook': 1})
+ ```
+
+ ## Algorithms
+
+ This library groups its methods by the tasks commonly performed in distributional semantics:
+
+ ### Text pre-processing
+
+ #### Class: TextProcessor
+
+ The available methods, grouped by stages, are the following:
+
+ **Stage: start**
+
+ - *set_model*: Takes a text and analyzes it using a language model.
+
+ **Stage: model**
+
+ - *get_tokens*: Returns a list of spaCy tokens from a Doc object.
+ - *get_sentences*: Returns a list containing sentences as strings or as spaCy Span objects.
+
+ **Stage: token_manipulation**
+
+ - *remove_stop_words*: Filters tokens, returning only alphanumeric tokens that are not stop words.
+ - *raw_tokenizer*: Tokenizes a text including punctuation and stop words.
+ - *alphanumeric_raw_tokenizer*: Tokenizes a text including only alphanumeric characters and stop words.
+ - *filter_by_pos*: Returns a list of spaCy tokens filtered by a specific part-of-speech tag.
+ - *pos_tagger*: Returns a list of (Token, POS) tuples, pairing each token with its part-of-speech tag.
+ - *get_text*: Returns the text content from a list of spaCy tokens or Span objects.
+
+ **Stage: text_processing**
+
+ - *to_lower*: Returns a list of lowercased words.
+ - *corpus_by_context_window*: Splits the tokens into groups of window_size consecutive words and joins each group into a string.
+ - *get_vocabulary*: Returns the vocabulary of a text.
+ - *join*: Joins a list of strings into a single string.
+ - *save_to_file*: Saves a list of strings or (Token, POS) tuples to the specified path.
+
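+ As a hedged illustration, here is how a couple of these methods might be called in standalone mode. Only the method names above are documented; the keyword names `tokens` and `pos` are assumptions:
+
+ ```python
+ from dstk import TextProcessor
+
+ # Workflow mode (as in Usage above): analyze a text and get spaCy tokens back
+ tokens = TextProcessor(text="The quick brown fox jumps.").set_model(model="my_spacy_model").get_tokens().result
+
+ # Standalone mode; the keyword names ("tokens", "pos") are illustrative assumptions
+ nouns = TextProcessor().filter_by_pos(tokens=tokens, pos="NOUN")
+ tagged = TextProcessor().pos_tagger(tokens=tokens)
+ ```
+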
+ ### Find collocations
+
+ #### Class: Collocations
+
+ The available methods, grouped by stages, are the following:
+
+ **Stage: start**
+
+ - *extract_ngrams*: Extracts both the context words of the target collocation, returned as tuples whose length corresponds to the specified window_size, and the collocations of the target word, in either a directed or an undirected manner.
+
+ **Stage: collocates**
+
+ - *count_collocates*: Counts the collocates of the target word.
+
+ **Stage: count**
+
+ - *plot*: Plots the counts of the collocates.
+
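+ A minimal workflow-mode sketch following this stage order. The `target_word` and `window_size` keywords appear in the pipeline examples above; the constructor keyword (`tokens`) is an assumption:
+
+ ```python
+ from dstk import Collocations
+
+ tokens = ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]
+
+ # Stage order: extract_ngrams (start) -> count_collocates (collocates)
+ counts = Collocations(tokens=tokens).extract_ngrams(target_word="fox", window_size=[2, 2]).count_collocates().result
+ ```
+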
+ ### Build a matrix from a text corpus
+
+ #### Class: TextMatrixBuilder
+
+ The available methods, grouped by stages, are the following:
+
+ **Stage: start**
+
+ - *create_dtm*: Creates a Document-Term Matrix (DTM).
+
+ **Stage: matrix_operations**
+
+ - *create_co_ocurrence_matrix*: Creates a co-occurrence matrix.
+ - *to_dataframe*: Creates a dataframe from a matrix representation.
+
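+ A sketch of the stage order, assuming the class is constructed from a list of documents (the `corpus` keyword is an assumption):
+
+ ```python
+ from dstk import TextMatrixBuilder
+
+ corpus = ["the quick brown fox", "the lazy dog", "the sun sets behind the hills"]
+
+ # Stage order: create_dtm (start) -> create_co_ocurrence_matrix, to_dataframe (matrix_operations)
+ df = TextMatrixBuilder(corpus=corpus).create_dtm().create_co_ocurrence_matrix().to_dataframe().result
+ ```
+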
+ ### Weight the co-occurrence matrix
+
+ #### Class: WeightMatrix
+
+ The available methods, grouped by stages, are the following:
+
+ **Stage: start**
+
+ - *pmi*: Weights a co-occurrence matrix by PMI or PPMI.
+ - *tf_idf*: Weights a co-occurrence matrix by TF-IDF.
+
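+ For reference, PMI(w, c) = log(P(w, c) / (P(w) * P(c))), and PPMI clips negative values to zero. A sketch, assuming the class is constructed from a co-occurrence matrix and that `pmi` takes a flag selecting PPMI (the `matrix` and `positive` keywords are assumptions):
+
+ ```python
+ from dstk import TextMatrixBuilder, WeightMatrix
+
+ corpus = ["the quick brown fox", "the lazy dog"]
+ co_occurrence_matrix = TextMatrixBuilder(corpus=corpus).create_dtm().create_co_ocurrence_matrix().result
+
+ # PPMI(w, c) = max(PMI(w, c), 0); the "positive" flag is an assumed keyword
+ weighted = WeightMatrix(matrix=co_occurrence_matrix).pmi(positive=True).result
+ ```
+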
+ ### Generate word embeddings from a co-occurrence matrix
+
+ #### Class: CountModels
+
+ The available methods, grouped by stages, are the following:
+
+ **Stage: start**
+
+ - *scale_matrix*: Scales the input matrix to have zero mean and unit variance for each feature.
+
+ **Stage: embeddings**
+
+ - *svd_embeddings*: Generates word embeddings using truncated Singular Value Decomposition (SVD).
+ - *pca_embeddings*: Generates word embeddings using Principal Component Analysis (PCA).
+ - *to_dataframe*: Creates a dataframe from a matrix representation.
+
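+ A sketch of the stage order, given the `weighted` matrix from the WeightMatrix sketch above; the `matrix` and `n_components` keywords are assumptions:
+
+ ```python
+ from dstk import CountModels
+
+ # Stage order: scale_matrix (start) -> svd_embeddings (embeddings)
+ embeddings = CountModels(matrix=weighted).scale_matrix().svd_embeddings(n_components=100).result
+ ```
+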
+ ### Measure the distance between two words (after generating the word embeddings)
+
+ #### Class: GeometricDistance
+
+ The available methods, grouped by stages, are the following:
+
+ **Stage: start**
+
+ - *euclidean_distance*: Computes the Euclidean distance between the embeddings of two words.
+ - *manhattan_distance*: Computes the Manhattan distance between the embeddings of two words.
+ - *cos_similarity*: Computes the cosine similarity between the embeddings of two words.
+ - *nearest_neighbors*: Returns the top N most semantically similar words to a given target word, based on the specified distance or similarity metric.
+
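+ A hedged sketch, given the `embeddings` from the CountModels sketch above; only the method names are documented, so every keyword name here (`embeddings`, `word_1`, `word_2`, `word`, `n`, `metric`) is an assumption:
+
+ ```python
+ from dstk import GeometricDistance
+
+ # Both methods belong to the start stage, so each call uses a fresh instance
+ sim = GeometricDistance(embeddings=embeddings).cos_similarity(word_1="fox", word_2="dog").result
+ neighbors = GeometricDistance(embeddings=embeddings).nearest_neighbors(word="fox", n=5, metric="cosine").result
+ ```
+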
+ ### Generate word embeddings using neural networks
+
+ #### Class: PredictModels
+
+ The available methods, grouped by stages, are the following:
+
+ **Stage: start**
+
+ - *word2vec*: Creates word embeddings using the Word2Vec algorithm.
+ - *fastText*: Creates word embeddings using the FastText algorithm.
+ - *load_model*: Loads the trained embeddings in .model (Word2Vec) or .bin (FastText) format, depending on the algorithm used.
+
+ **Stage: predict_model**
+
+ - *save_model*: Saves the trained embeddings in .model (Word2Vec) or .bin (FastText) format, depending on the algorithm used. Can also be used in stage embeddings_operations.
+ - *nearest_neighbors*: Returns the top N most semantically similar words to a given target word. Can also be used in stage embeddings_operations.
+ - *cos_similarity*: Computes the cosine similarity between the embeddings of two words. Can also be used in stage embeddings_operations.
+ - *to_matrix*: Returns a matrix representation of the word embeddings and their associated labels.
+
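+ A sketch of training and saving a model; the constructor input and the keyword names (`sentences`, `path`) are assumptions:
+
+ ```python
+ from dstk import PredictModels
+
+ sentences = [["the", "quick", "brown", "fox"], ["the", "lazy", "dog"]]
+
+ # Stage order: word2vec (start) -> save_model (predict_model); .model is the Word2Vec format
+ model = PredictModels(sentences=sentences).word2vec().save_model(path="embeddings.model").result
+ ```
+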
+ ### Plot the word embeddings
+
+ #### Class: PlotEmbeddings
+
+ The available methods, grouped by stages, are the following:
+
+ **Stage: start**
+
+ - *elbow_analysis*: Generates an Elbow plot to help determine the optimal number of clusters for the word embeddings.
+ - *extract_silhouette_score*: Extracts and plots the Silhouette score to help determine the optimal number of clusters for the word embeddings.
+
+ **Stage: clusters**
+
+ - *plot_embeddings_2D*: Generates a 2D plot of the word embeddings.
+ - *plot_embeddings_3D*: Generates a 3D plot of the word embeddings.
+
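+ A hedged sketch of the stage order, reusing the `embeddings` from the CountModels sketch above; the `embeddings`, `max_clusters` and `n_clusters` keywords are assumptions:
+
+ ```python
+ from dstk import PlotEmbeddings
+
+ # Stage order: elbow_analysis (start) -> plot_embeddings_2D (clusters)
+ PlotEmbeddings(embeddings=embeddings).elbow_analysis(max_clusters=10).plot_embeddings_2D(n_clusters=4)
+ ```
+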
+ ### Predefined pipelines
+
+ DSTK includes some pipelines that already cover most of the frequent tasks in distributional semantics:
+
+ - *StandardModel*: This pipeline generates word embeddings using the standard model as defined by Lenci & Sahlgren (97). It preprocesses the text by removing stop words, lowercasing the words and segmenting the text using a context window. The co-occurrence matrix is weighted with PPMI and reduced with truncated SVD.
+ - *SGNSModel*: This pipeline generates word embeddings using Skip-Gram with Negative Sampling (SGNS) as defined by Lenci & Sahlgren (162). It preprocesses the text by extracting the sentences, removing stop words and lowercasing them. The embeddings are trained with word2vec using SGNS. Returns an instance of PredictModels.
+
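+ A sketch of invoking a bundled pipeline, assuming it is imported from dstk and called like the custom pipelines above (both assumptions):
+
+ ```python
+ from dstk import StandardModel
+
+ text = "The quick brown fox jumps over the lazy dog while the sun sets behind the hills."
+ embeddings = StandardModel(text=text)
+ ```
+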
+ ### Other tools
+
+ You can also convert from MatrixRepresentation to a dataframe and vice versa using `matrix_to_dataframe` and `dataframe_to_matrix` from matrix_base.
+
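+ A sketch; the import path follows the `dstk/matrix_base.py` module shipped in this package, and the keyword names are assumptions:
+
+ ```python
+ from dstk.matrix_base import matrix_to_dataframe, dataframe_to_matrix
+
+ # Round-trip between MatrixRepresentation and a dataframe; keywords are assumed
+ df = matrix_to_dataframe(matrix=matrix)
+ matrix = dataframe_to_matrix(dataframe=df)
+ ```
+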
+ ## Contributing
+
+ I welcome contributions to improve this toolkit. If you have ideas or fixes, feel free to fork the repository and submit a pull request. Here are some ways you can help:
+
+ * Report bugs or issues.
+ * Suggest new features or algorithms to add.
+
+ ## License
+
+ This project is licensed under the GPL-3 License - see the [LICENSE](https://gitlab.com/CesarACabrera/distributional-semantics-toolkit/-/blob/master/LICENSE?ref_type=heads) file for details.
@@ -0,0 +1,28 @@
+ dstk/__init__.py,sha256=X58NNxR1EZ3CCpPMM569T_O-DUNGNjhpe6UoSn7Rrn0,356
+ dstk/collocations.py,sha256=fn47xrVHW1cgROMSLRbH5M9YxdNvO35tOIKCrXbpJvc,4985
+ dstk/count_models.py,sha256=D4OINB2Z2jUTyN3scLuNGREV0jHnvcITST3l-kHV9Jo,5489
+ dstk/geometric_distance.py,sha256=ddlUFv1nl0OKRgNb9_6fFf7loCDC-CQMDffQ6k0lxvE,5107
+ dstk/matrix_base.py,sha256=zLTM05dYUu7M7WUTV8i7ZQdDkJZ65iddR3ksd4Gy4T4,4199
+ dstk/pipeline_tools.py,sha256=ve3MfzZMoff6EQrJGyrlLW6ZQHWHEcGJSC_5aMpangk,933
+ dstk/pipelines.py,sha256=Za7DuCHSaZBSxzJyMOWHgTH3jm0aYkW4lav-nIMNdZo,4530
+ dstk/plot_embeddings.py,sha256=gfjXJGulRbV3uNC45gqMQoJN4YmfP5VVlpKB6eV7PTU,11210
+ dstk/predict_models.py,sha256=Cfv3ykzN5-ZIaeuqea0fkjZUy-bFHaDafwWdriBAwe0,8290
+ dstk/text_matrix_builder.py,sha256=VBs6pDS-YLAw0cuwVRHjgYHRXCNoo449p2IDl1-dKso,3405
+ dstk/text_processor.py,sha256=tHstoWJIHyC0s0zs0B8PIN2_L9woN7GlkiBPVixYsX4,18433
+ dstk/weight_matrix.py,sha256=wPBZeNro2ceSQukFzov_6HRFtIklEC_tDVs1Ze-KKQM,2346
+ dstk/workflow_tools.py,sha256=62zH2N91V9OYT0P9Jk97ahTmevnjmGF1Yda2BoQtSrU,11574
+ dstk/lib_types/__init__.py,sha256=Ka4bfePHC9HWUTiACBdEsaU8Go2J-C1D7ixeMG89lm4,252
+ dstk/lib_types/dstk_types.py,sha256=KCQwKav65nAD4VV1kixcWyF-m32HOKjbG0JO0Z6Vjsg,1011
+ dstk/lib_types/fasttext_types.py,sha256=5LXE77kgCPJHRx0zXlLTs7wRIQOGZiz30Pq0trIXcBA,51
+ dstk/lib_types/gensim_types.py,sha256=tg3OASG_EWuqFQw_pKM4HNjRk1yrMnmlBqdKm-orxag,34
+ dstk/lib_types/matplotlib_types.py,sha256=FSP2c6ryTscbuES7w1ccTtcMS1g3k_m-zD7ZlLkfb6I,177
+ dstk/lib_types/nltk_types.py,sha256=s_UVeJWIEmh2tzvS3ttuRjWXo84quMjIrm4OQf3vms4,21
+ dstk/lib_types/numpy_types.py,sha256=zxgVrHcRJ-_NGO3LE1aba0d4JQDLYN26us5ljlhIq7E,64
+ dstk/lib_types/pandas_types.py,sha256=bR27h-xyZ3FccROIHxqYpVvqMNoi1bvIzpq25cf8kkg,43
+ dstk/lib_types/sklearn_types.py,sha256=W59yIEkZM_E_tW061x1bY-LpRC2aCzLgtYmXANNSN3Q,47
+ dstk/lib_types/spacy_types.py,sha256=hUiaw4AywSW8o42h5lp3t6a4yosG_GasdJX2RCKgW7o,125
+ dstklib-1.0.0.dist-info/LICENSE,sha256=LpSgNPBfwn5F4CVhnTbhpiX2f0YgRMzGWQ7Sphuuwuc,35139
+ dstklib-1.0.0.dist-info/METADATA,sha256=MQ9P-jmmX_ezDUR7sZeA8XzDr43kqExBQI0Oe2UHOhM,13099
+ dstklib-1.0.0.dist-info/WHEEL,sha256=VyG4dJCdJcxE1baiVBm9NET3Nj7Wne1lZZq7UFNxRpg,97
+ dstklib-1.0.0.dist-info/top_level.txt,sha256=b_MNmKso0-ra2M7snsy5fZBW-l9MItjrwMYBd-tiOYo,5
+ dstklib-1.0.0.dist-info/RECORD,,
@@ -0,0 +1,5 @@
+ Wheel-Version: 1.0
+ Generator: setuptools (75.1.1.post0)
+ Root-Is-Purelib: true
+ Tag: py3-none-any
+
@@ -0,0 +1 @@
+ dstk