spark-nlp 6.1.0__py2.py3-none-any.whl → 6.1.2__py2.py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {spark_nlp-6.1.0.dist-info → spark_nlp-6.1.2.dist-info}/METADATA +12 -11
- {spark_nlp-6.1.0.dist-info → spark_nlp-6.1.2.dist-info}/RECORD +14 -12
- sparknlp/__init__.py +1 -1
- sparknlp/annotator/embeddings/auto_gguf_embeddings.py +4 -12
- sparknlp/annotator/seq2seq/__init__.py +1 -0
- sparknlp/annotator/seq2seq/auto_gguf_model.py +11 -10
- sparknlp/annotator/seq2seq/auto_gguf_reranker.py +329 -0
- sparknlp/annotator/seq2seq/auto_gguf_vision_model.py +7 -9
- sparknlp/common/properties.py +25 -30
- sparknlp/internal/__init__.py +6 -1
- sparknlp/reader/reader2doc.py +25 -9
- sparknlp/reader/reader2table.py +163 -0
- {spark_nlp-6.1.0.dist-info → spark_nlp-6.1.2.dist-info}/WHEEL +0 -0
- {spark_nlp-6.1.0.dist-info → spark_nlp-6.1.2.dist-info}/top_level.txt +0 -0
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: spark-nlp
-Version: 6.1.0
+Version: 6.1.2
 Summary: John Snow Labs Spark NLP is a natural language processing library built on top of Apache Spark ML. It provides simple, performant & accurate NLP annotations for machine learning pipelines, that scale easily in a distributed environment.
 Home-page: https://github.com/JohnSnowLabs/spark-nlp
 Author: John Snow Labs
@@ -58,7 +58,7 @@ Dynamic: summary
 
 Spark NLP is a state-of-the-art Natural Language Processing library built on top of Apache Spark. It provides **simple**, **performant** & **accurate** NLP annotations for machine learning pipelines that **scale** easily in a distributed environment.
 
-Spark NLP comes with **
+Spark NLP comes with **100000+** pretrained **pipelines** and **models** in more than **200+** languages.
 It also offers tasks such as **Tokenization**, **Word Segmentation**, **Part-of-Speech Tagging**, Word and Sentence **Embeddings**, **Named Entity Recognition**, **Dependency Parsing**, **Spell Checking**, **Text Classification**, **Sentiment Analysis**, **Token Classification**, **Machine Translation** (+180 languages), **Summarization**, **Question Answering**, **Table Question Answering**, **Text Generation**, **Image Classification**, **Image to Text (captioning)**, **Automatic Speech Recognition**, **Zero-Shot Learning**, and many more [NLP tasks](#features).
 
 **Spark NLP** is the only open-source NLP library in **production** that offers state-of-the-art transformers such as **BERT**, **CamemBERT**, **ALBERT**, **ELECTRA**, **XLNet**, **DistilBERT**, **RoBERTa**, **DeBERTa**, **XLM-RoBERTa**, **Longformer**, **ELMO**, **Universal Sentence Encoder**, **Llama-2**, **M2M100**, **BART**, **Instructor**, **E5**, **Google T5**, **MarianMT**, **OpenAI GPT2**, **Vision Transformers (ViT)**, **OpenAI Whisper**, **Llama**, **Mistral**, **Phi**, **Qwen2**, and many more not only to **Python** and **R**, but also to **JVM** ecosystem (**Java**, **Scala**, and **Kotlin**) at **scale** by extending **Apache Spark** natively.
@@ -102,7 +102,7 @@ $ java -version
 $ conda create -n sparknlp python=3.7 -y
 $ conda activate sparknlp
 # spark-nlp by default is based on pyspark 3.x
-$ pip install spark-nlp==6.1.
+$ pip install spark-nlp==6.1.2 pyspark==3.3.1
 ```
 
 In Python console or Jupyter `Python3` kernel:
@@ -168,11 +168,11 @@ For a quick example of using pipelines and models take a look at our official [d
 
 ### Apache Spark Support
 
-Spark NLP *6.1.
+Spark NLP *6.1.2* has been built on top of Apache Spark 3.4 while fully supports Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, 3.4.x, and 3.5.x
 
 | Spark NLP | Apache Spark 3.5.x | Apache Spark 3.4.x | Apache Spark 3.3.x | Apache Spark 3.2.x | Apache Spark 3.1.x | Apache Spark 3.0.x | Apache Spark 2.4.x | Apache Spark 2.3.x |
 |-----------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
-| 6.
+| 6.x.x and up | YES | YES | YES | YES | YES | YES | NO | NO |
 | 5.5.x | YES | YES | YES | YES | YES | YES | NO | NO |
 | 5.4.x | YES | YES | YES | YES | YES | YES | NO | NO |
 | 5.3.x | YES | YES | YES | YES | YES | YES | NO | NO |
@@ -198,7 +198,7 @@ Find out more about 4.x `SparkNLP` versions in our official [documentation](http
 
 ### Databricks Support
 
-Spark NLP 6.1.0 has been tested and is compatible with the following runtimes:
+Spark NLP 6.1.2 has been tested and is compatible with the following runtimes:
 
 | **CPU** | **GPU** |
 |--------------------|--------------------|
@@ -206,16 +206,17 @@ Spark NLP 6.1.0 has been tested and is compatible with the following runtimes:
 | 14.2 / 14.2 ML | 14.2 ML & GPU |
 | 14.3 / 14.3 ML | 14.3 ML & GPU |
 | 15.0 / 15.0 ML | 15.0 ML & GPU |
-| 15.1 / 15.
-| 15.2 / 15.
-| 15.3 / 15.
-| 15.4 / 15.
+| 15.1 / 15.1 ML | 15.1 ML & GPU |
+| 15.2 / 15.2 ML | 15.2 ML & GPU |
+| 15.3 / 15.3 ML | 15.3 ML & GPU |
+| 15.4 / 15.4 ML | 15.4 ML & GPU |
+| 16.4 / 16.4 ML | 16.4 ML & GPU |
 
 We are compatible with older runtimes. For a full list check databricks support in our official [documentation](https://sparknlp.org/docs/en/install#databricks-support)
 
 ### EMR Support
 
-Spark NLP 6.1.
+Spark NLP 6.1.2 has been tested and is compatible with the following EMR releases:
 
 | **EMR Release** |
 |--------------------|
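The support matrix above reduces to a simple membership check. As an illustration (not a Spark NLP API; the function name is hypothetical), the 6.x.x row of the Apache Spark table can be sketched as:

```python
def spark_supported_by_sparknlp_6x(spark_version: str) -> bool:
    """Check an Apache Spark version string against the 6.x.x row of the
    support table above: 3.0.x through 3.5.x are supported, 2.3.x and
    2.4.x are not."""
    major_minor = ".".join(spark_version.split(".")[:2])
    return major_minor in {"3.0", "3.1", "3.2", "3.3", "3.4", "3.5"}
```

For example, the pinned `pyspark==3.3.1` from the install snippet passes this check, while a legacy 2.4.x cluster does not.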
@@ -3,7 +3,7 @@ com/johnsnowlabs/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,
 com/johnsnowlabs/ml/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
 com/johnsnowlabs/ml/ai/__init__.py,sha256=YQiK2M7U4d8y5irPy_HB8ae0mSpqS9583MH44pnKJXc,295
 com/johnsnowlabs/nlp/__init__.py,sha256=DPIVXtONO5xXyOk-HB0-sNiHAcco17NN13zPS_6Uw8c,294
-sparknlp/__init__.py,sha256=
+sparknlp/__init__.py,sha256=beylcD_JfS6wohhs_UyT6WE2IBwRT4_C75_xPyG1_BE,13814
 sparknlp/annotation.py,sha256=I5zOxG5vV2RfPZfqN9enT1i4mo6oBcn3Lrzs37QiOiA,5635
 sparknlp/annotation_audio.py,sha256=iRV_InSVhgvAwSRe9NTbUH9v6OGvTM-FPCpSAKVu0mE,1917
 sparknlp/annotation_image.py,sha256=xhCe8Ko-77XqWVuuYHFrjKqF6zPd8Z-RY_rmZXNwCXU,2547
@@ -105,7 +105,7 @@ sparknlp/annotator/dependency/dependency_parser.py,sha256=SxyvHPp8Hs1Xnm5X1nLTMi
 sparknlp/annotator/dependency/typed_dependency_parser.py,sha256=60vPdYkbFk9MPGegg3m9Uik9cMXpMZd8tBvXG39gNww,12456
 sparknlp/annotator/embeddings/__init__.py,sha256=Aw1oaP5DI0OS6259c0TEZZ6j3VFSvYFEerah5a-udVw,2528
 sparknlp/annotator/embeddings/albert_embeddings.py,sha256=6Rd1LIn8oFIpq_ALcJh-RUjPEO7Ht8wsHY6JHSFyMkw,9995
-sparknlp/annotator/embeddings/auto_gguf_embeddings.py,sha256=
+sparknlp/annotator/embeddings/auto_gguf_embeddings.py,sha256=TRAYbhGS4K8uSpsScvDr6uD3lYdxMpCUjwDMhV_74rM,19977
 sparknlp/annotator/embeddings/bert_embeddings.py,sha256=HVUjkg56kBcpGZCo-fmPG5uatMDF3swW_lnbpy1SgSI,8463
 sparknlp/annotator/embeddings/bert_sentence_embeddings.py,sha256=NQy9KuXT9aKsTpYCR5RAeoFWI2YqEGorbdYrf_0KKmw,9148
 sparknlp/annotator/embeddings/bge_embeddings.py,sha256=ZGbxssjJFaSfbcgqAPV5hsu81SnC0obgCVNOoJkArDA,8105
@@ -167,9 +167,10 @@ sparknlp/annotator/sentence/sentence_detector_dl.py,sha256=-Osj9Bm9KyZRTAWkOsK9c
 sparknlp/annotator/sentiment/__init__.py,sha256=Lq3vKaZS1YATLMg0VNXSVtkWL5q5G9taGBvdrvSwnfg,766
 sparknlp/annotator/sentiment/sentiment_detector.py,sha256=m545NGU0Xzg_PO6_qIfpli1uZj7JQcyFgqe9R6wAPFI,8154
 sparknlp/annotator/sentiment/vivekn_sentiment.py,sha256=4rpXWDgzU6ddnbrSCp9VdLb2epCc9oZ3c6XcqxEw8nk,9655
-sparknlp/annotator/seq2seq/__init__.py,sha256=
-sparknlp/annotator/seq2seq/auto_gguf_model.py,sha256=
-sparknlp/annotator/seq2seq/
+sparknlp/annotator/seq2seq/__init__.py,sha256=aDiph00Hyq7L8uDY0frtyuHtqFodBqTMbixx_nq4z1I,1841
+sparknlp/annotator/seq2seq/auto_gguf_model.py,sha256=yhZQHMHfp88rQvLHTWyS-8imZrwqp-8RQQwnw6PmHfc,11749
+sparknlp/annotator/seq2seq/auto_gguf_reranker.py,sha256=QpGpyO1_epWzMospTFrfVVLj2KZ_n3gbHN269vo9fbU,12667
+sparknlp/annotator/seq2seq/auto_gguf_vision_model.py,sha256=swBek2026dW6BOX5O9P8Uq41X2GC71VGW0ADFeUIvs0,15299
 sparknlp/annotator/seq2seq/bart_transformer.py,sha256=I1flM4yeCzEAKOdQllBC30XuedxVJ7ferkFhZ6gwEbE,18481
 sparknlp/annotator/seq2seq/cohere_transformer.py,sha256=43LZBVazZMgJRCsN7HaYjVYfJ5hRMV95QZyxMtXq-m4,13496
 sparknlp/annotator/seq2seq/cpm_transformer.py,sha256=0CnBFMlxMu0pD2QZMHyoGtIYgXqfUQm68vr6zEAa6Eg,13290
@@ -223,12 +224,12 @@ sparknlp/common/annotator_properties.py,sha256=7B1os7pBUfHo6b7IPQAXQ-nir0u3tQLzD
 sparknlp/common/annotator_type.py,sha256=ash2Ip1IOOiJamPVyy_XQj8Ja_DRHm0b9Vj4Ni75oKM,1225
 sparknlp/common/coverage_result.py,sha256=No4PSh1HSs3PyRI1zC47x65tWgfirqPI290icHQoXEI,823
 sparknlp/common/match_strategy.py,sha256=kt1MUPqU1wCwk5qCdYk6jubHbU-5yfAYxb9jjAOrdnY,1678
-sparknlp/common/properties.py,sha256=
+sparknlp/common/properties.py,sha256=7eBxODxKmFQAgOtrxUH9ly4LugUlkNRVXNQcM60AUK4,53025
 sparknlp/common/read_as.py,sha256=imxPGwV7jr4Li_acbo0OAHHRGCBbYv-akzEGaBWEfcY,1226
 sparknlp/common/recursive_annotator_approach.py,sha256=vqugBw22cE3Ff7PIpRlnYFuOlchgL0nM26D8j-NdpqU,1449
 sparknlp/common/storage.py,sha256=D91H3p8EIjNspjqAYu6ephRpCUtdcAir4_PrAbkIQWE,4842
 sparknlp/common/utils.py,sha256=Yne6yYcwKxhOZC-U4qfYoDhWUP_6BIaAjI5X_P_df1E,1306
-sparknlp/internal/__init__.py,sha256=
+sparknlp/internal/__init__.py,sha256=m7Y7y-IPkB6aJuGUCM54eOueGOEt65C3ujAzN16hegQ,40995
 sparknlp/internal/annotator_java_ml.py,sha256=UGPoThG0rGXUOXGSQnDzEDW81Mu1s5RPF29v7DFyE3c,1187
 sparknlp/internal/annotator_transformer.py,sha256=fXmc2IWXGybqZpbEU9obmbdBYPc798y42zvSB4tqV9U,1448
 sparknlp/internal/extended_java_wrapper.py,sha256=hwP0133-hDiDf5sBF-P3MtUsuuDj1PpQbtGZQIRwzfk,2240
@@ -247,7 +248,8 @@ sparknlp/pretrained/utils.py,sha256=T1MrvW_DaWk_jcOjVLOea0NMFE9w8fe0ZT_5urZ_nEY,
 sparknlp/reader/__init__.py,sha256=-Toj3AIBki-zXPpV8ezFTI2LX1yP_rK2bhpoa8nBkTw,685
 sparknlp/reader/enums.py,sha256=MNGug9oJ1BBLM1Pbske13kAabalDzHa2kucF5xzFpHs,770
 sparknlp/reader/pdf_to_text.py,sha256=eWw-cwjosmcSZ9eHso0F5QQoeGBBnwsOhzhCXXvMjZA,7169
-sparknlp/reader/reader2doc.py,sha256=
+sparknlp/reader/reader2doc.py,sha256=LRqfaL9nidhlPkJIwTJo7SnGYmNNfOqwEdrsWYGEdnI,7146
+sparknlp/reader/reader2table.py,sha256=GC6Yz0gQ83S6XKOi329TUNQuAvLrBxysqDkDRZPvcYA,4759
 sparknlp/reader/sparknlp_reader.py,sha256=MJs8v_ECYaV1SOabI1L_2MkVYEDVImtwgbYypO7DJSY,20623
 sparknlp/training/__init__.py,sha256=qREi9u-5Vc2VjpL6-XZsyvu5jSEIdIhowW7_kKaqMqo,852
 sparknlp/training/conll.py,sha256=wKBiSTrjc6mjsl7Nyt6B8f4yXsDJkZb-sn8iOjix9cE,6961
@@ -279,7 +281,7 @@ sparknlp/training/_tf_graph_builders_1x/ner_dl/dataset_encoder.py,sha256=R4yHFN3
 sparknlp/training/_tf_graph_builders_1x/ner_dl/ner_model.py,sha256=EoCSdcIjqQ3wv13MAuuWrKV8wyVBP0SbOEW41omHlR0,23189
 sparknlp/training/_tf_graph_builders_1x/ner_dl/ner_model_saver.py,sha256=k5CQ7gKV6HZbZMB8cKLUJuZxoZWlP_DFWdZ--aIDwsc,2356
 sparknlp/training/_tf_graph_builders_1x/ner_dl/sentence_grouper.py,sha256=pAxjWhjazSX8Vg0MFqJiuRVw1IbnQNSs-8Xp26L4nko,870
-spark_nlp-6.1.0.dist-info/METADATA,sha256=
-spark_nlp-6.1.0.dist-info/WHEEL,sha256=
-spark_nlp-6.1.0.dist-info/top_level.txt,sha256=
-spark_nlp-6.1.0.dist-info/RECORD,,
+spark_nlp-6.1.2.dist-info/METADATA,sha256=l6za09CF7uliVRGYEFRi02vualYyzRt_kPuuVa8MnWg,19774
+spark_nlp-6.1.2.dist-info/WHEEL,sha256=JNWh1Fm1UdwIQV075glCn4MVuCRs0sotJIq-J6rbxCU,109
+spark_nlp-6.1.2.dist-info/top_level.txt,sha256=uuytur4pyMRw2H_txNY2ZkaucZHUs22QF8-R03ch_-E,13
+spark_nlp-6.1.2.dist-info/RECORD,,
sparknlp/annotator/embeddings/auto_gguf_embeddings.py
CHANGED
@@ -12,8 +12,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 """Contains classes for the AutoGGUFEmbeddings."""
-from typing import List
-
 from sparknlp.common import *
 
 
@@ -32,7 +30,7 @@ class AutoGGUFEmbeddings(AnnotatorModel, HasBatchedAnnotate):
     ...     .setInputCols(["document"]) \\
     ...     .setOutputCol("embeddings")
 
-    The default model is ``"
+    The default model is ``"Qwen3_Embedding_0.6B_Q8_0_gguf"``, if no name is provided.
 
     For extended examples of usage, see the
     `AutoGGUFEmbeddingsTest <https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/test/scala/com/johnsnowlabs/nlp/embeddings/AutoGGUFEmbeddingsTest.scala>`__
@@ -313,12 +311,6 @@ class AutoGGUFEmbeddings(AnnotatorModel, HasBatchedAnnotate):
         "Set the pooling type for embeddings, use model default if unspecified",
         typeConverter=TypeConverters.toString,
     )
-    embedding = Param(
-        Params._dummy(),
-        "embedding",
-        "Whether to load model with embedding support",
-        typeConverter=TypeConverters.toBoolean,
-    )
     flashAttention = Param(
         Params._dummy(),
         "flashAttention",
@@ -489,10 +481,10 @@ class AutoGGUFEmbeddings(AnnotatorModel, HasBatchedAnnotate):
             classname=classname, java_model=java_model
         )
         self._setDefault(
-            embedding=True,
             nCtx=4096,
             nBatch=512,
             poolingType="MEAN",
+            nGpuLayers=99,
         )
 
     @staticmethod
@@ -517,13 +509,13 @@ class AutoGGUFEmbeddings(AnnotatorModel, HasBatchedAnnotate):
         return AutoGGUFEmbeddings(java_model=jModel)
 
     @staticmethod
-    def pretrained(name="
+    def pretrained(name="Qwen3_Embedding_0.6B_Q8_0_gguf", lang="en", remote_loc=None):
         """Downloads and loads a pretrained model.
 
         Parameters
         ----------
         name : str, optional
-            Name of the pretrained model, by default "
+            Name of the pretrained model, by default "Qwen3_Embedding_0.6B_Q8_0_gguf"
         lang : str, optional
             Language of the pretrained model, by default "en"
         remote_loc : str, optional
sparknlp/annotator/seq2seq/__init__.py
CHANGED

@@ -32,3 +32,4 @@ from sparknlp.annotator.seq2seq.llama3_transformer import *
 from sparknlp.annotator.seq2seq.cohere_transformer import *
 from sparknlp.annotator.seq2seq.olmo_transformer import *
 from sparknlp.annotator.seq2seq.phi4_transformer import *
+from sparknlp.annotator.seq2seq.auto_gguf_reranker import *
sparknlp/annotator/seq2seq/auto_gguf_model.py
CHANGED

@@ -37,7 +37,11 @@ class AutoGGUFModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppProperties):
     ...     .setInputCols(["document"]) \\
     ...     .setOutputCol("completions")
 
-    The default model is ``"
+    The default model is ``"Phi_4_mini_instruct_Q4_K_M_gguf"``, if no name is provided.
+
+    AutoGGUFModel is also able to load pretrained models from AutoGGUFVisionModel. Just
+    specify the same name for the pretrained method, and it will load the text-part of the
+    multimodal model automatically.
 
     For extended examples of usage, see the
     `AutoGGUFModelTest <https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/test/scala/com/johnsnowlabs/nlp/annotators/seq2seq/AutoGGUFModelTest.scala>`__
@@ -120,8 +124,6 @@ class AutoGGUFModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppProperties):
         Set path to static lookup cache to use for lookup decoding (not updated by generation)
     lookupCacheDynamicFilePath
         Set path to dynamic lookup cache to use for lookup decoding (updated by generation)
-    embedding
-        Whether to load model with embedding support
     flashAttention
         Whether to enable Flash Attention
     inputPrefixBos
@@ -252,20 +254,19 @@ class AutoGGUFModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppProperties):
             useChatTemplate=True,
             nCtx=4096,
             nBatch=512,
-            embedding=False,
             nPredict=100,
             nGpuLayers=99,
             systemPrompt="You are a helpful assistant."
         )
 
     @staticmethod
-    def loadSavedModel(
+    def loadSavedModel(path, spark_session):
         """Loads a locally saved model.
 
         Parameters
         ----------
-
-
+        path : str
+            Path to the gguf model
         spark_session : pyspark.sql.SparkSession
             The current SparkSession
 
@@ -275,17 +276,17 @@ class AutoGGUFModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppProperties):
             The restored model
         """
         from sparknlp.internal import _AutoGGUFLoader
-        jModel = _AutoGGUFLoader(
+        jModel = _AutoGGUFLoader(path, spark_session._jsparkSession)._java_obj
         return AutoGGUFModel(java_model=jModel)
 
     @staticmethod
-    def pretrained(name="
+    def pretrained(name="Phi_4_mini_instruct_Q4_K_M_gguf", lang="en", remote_loc=None):
         """Downloads and loads a pretrained model.
 
         Parameters
         ----------
         name : str, optional
-            Name of the pretrained model, by default "
+            Name of the pretrained model, by default "Phi_4_mini_instruct_Q4_K_M_gguf"
         lang : str, optional
             Language of the pretrained model, by default "en"
         remote_loc : str, optional
@@ -0,0 +1,329 @@
|
|
|
1
|
+
# Copyright 2017-2023 John Snow Labs
|
|
2
|
+
#
|
|
3
|
+
# Licensed under the Apache License, Version 2.0 (the "License");
|
|
4
|
+
# you may not use this file except in compliance with the License.
|
|
5
|
+
# You may obtain a copy of the License at
|
|
6
|
+
#
|
|
7
|
+
# http://www.apache.org/licenses/LICENSE-2.0
|
|
8
|
+
#
|
|
9
|
+
# Unless required by applicable law or agreed to in writing, software
|
|
10
|
+
# distributed under the License is distributed on an "AS IS" BASIS,
|
|
11
|
+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
12
|
+
# See the License for the specific language governing permissions and
|
|
13
|
+
# limitations under the License.
|
|
14
|
+
"""Contains classes for the AutoGGUFReranker."""
|
|
15
|
+
from typing import List, Dict
|
|
16
|
+
|
|
17
|
+
from sparknlp.common import *
|
|
18
|
+
|
|
19
|
+
|
|
20
|
+
class AutoGGUFReranker(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppProperties):
|
|
21
|
+
"""
|
|
22
|
+
Annotator that uses the llama.cpp library to rerank text documents based on their relevance
|
|
23
|
+
to a given query using GGUF-format reranking models.
|
|
24
|
+
|
|
25
|
+
This annotator is specifically designed for text reranking tasks, where multiple documents
|
|
26
|
+
or text passages are ranked according to their relevance to a query. It uses specialized
|
|
27
|
+
reranking models in GGUF format that output relevance scores for each input document.
|
|
28
|
+
|
|
29
|
+
The reranker takes a query (set via :meth:`.setQuery`) and a list of documents, then returns the
|
|
30
|
+
same documents with added metadata containing relevance scores. The documents are processed
|
|
31
|
+
in batches and each receives a ``relevance_score`` in its metadata indicating how relevant
|
|
32
|
+
it is to the provided query.
|
|
33
|
+
|
|
34
|
+
For settable parameters, and their explanations, see the parameters of this class and refer to
|
|
35
|
+
the llama.cpp documentation of
|
|
36
|
+
`server.cpp <https://github.com/ggerganov/llama.cpp/tree/7d5e8777ae1d21af99d4f95be10db4870720da91/examples/server>`__
|
|
37
|
+
for more information.
|
|
38
|
+
|
|
39
|
+
If the parameters are not set, the annotator will default to use the parameters provided by
|
|
40
|
+
the model.
|
|
41
|
+
|
|
42
|
+
Pretrained models can be loaded with :meth:`.pretrained` of the companion
|
|
43
|
+
object:
|
|
44
|
+
|
|
45
|
+
>>> reranker = AutoGGUFReranker.pretrained() \\
|
|
46
|
+
... .setInputCols(["document"]) \\
|
|
47
|
+
... .setOutputCol("reranked_documents") \\
|
|
48
|
+
... .setQuery("A man is eating pasta.")
|
|
49
|
+
|
|
50
|
+
The default model is ``"bge-reranker-v2-m3-Q4_K_M"``, if no name is provided.
|
|
51
|
+
|
|
52
|
+
For extended examples of usage, see the
|
|
53
|
+
`AutoGGUFRerankerTest <https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/test/scala/com/johnsnowlabs/nlp/annotators/seq2seq/AutoGGUFRerankerTest.scala>`__
|
|
54
|
+
and the
|
|
55
|
+
`example notebook <https://github.com/JohnSnowLabs/spark-nlp/tree/master/examples/python/llama.cpp/llama.cpp_in_Spark_NLP_AutoGGUFReranker.ipynb>`__.
|
|
56
|
+
|
|
57
|
+
For available pretrained models please see the `Models Hub <https://sparknlp.org/models>`__.
|
|
58
|
+
|
|
59
|
+
====================== ======================
|
|
60
|
+
Input Annotation types Output Annotation type
|
|
61
|
+
====================== ======================
|
|
62
|
+
``DOCUMENT`` ``DOCUMENT``
|
|
63
|
+
====================== ======================
|
|
64
|
+
|
|
65
|
+
Parameters
|
|
66
|
+
----------
|
|
67
|
+
query
|
|
68
|
+
The query to be used for reranking. If not set, the input text will be used as the query.
|
|
69
|
+
nThreads
|
|
70
|
+
Set the number of threads to use during generation
|
|
71
|
+
nThreadsDraft
|
|
72
|
+
Set the number of threads to use during draft generation
|
|
73
|
+
nThreadsBatch
|
|
74
|
+
Set the number of threads to use during batch and prompt processing
|
|
75
|
+
nThreadsBatchDraft
|
|
76
|
+
Set the number of threads to use during batch and prompt processing
|
|
77
|
+
nCtx
|
|
78
|
+
Set the size of the prompt context
|
|
79
|
+
nBatch
|
|
80
|
+
Set the logical batch size for prompt processing (must be >=32 to use BLAS)
|
|
81
|
+
nUbatch
|
|
82
|
+
Set the physical batch size for prompt processing (must be >=32 to use BLAS)
|
|
83
|
+
nGpuLayers
|
|
84
|
+
Set the number of layers to store in VRAM (-1 - use default)
|
|
85
|
+
nGpuLayersDraft
|
|
86
|
+
Set the number of layers to store in VRAM for the draft model (-1 - use default)
|
|
87
|
+
gpuSplitMode
|
|
88
|
+
Set how to split the model across GPUs
|
|
89
|
+
mainGpu
|
|
90
|
+
Set the main GPU that is used for scratch and small tensors.
|
|
91
|
+
tensorSplit
|
|
92
|
+
Set how split tensors should be distributed across GPUs
|
|
93
|
+
grpAttnN
|
|
94
|
+
Set the group-attention factor
|
|
95
|
+
grpAttnW
|
|
96
|
+
Set the group-attention width
|
|
97
|
+
ropeFreqBase
|
|
98
|
+
Set the RoPE base frequency, used by NTK-aware scaling
|
|
99
|
+
ropeFreqScale
|
|
100
|
+
Set the RoPE frequency scaling factor, expands context by a factor of 1/N
|
|
101
|
+
yarnExtFactor
|
|
102
|
+
Set the YaRN extrapolation mix factor
|
|
103
|
+
yarnAttnFactor
|
|
104
|
+
Set the YaRN scale sqrt(t) or attention magnitude
|
|
105
|
+
yarnBetaFast
|
|
106
|
+
Set the YaRN low correction dim or beta
|
|
107
|
+
yarnBetaSlow
|
|
108
|
+
Set the YaRN high correction dim or alpha
|
|
109
|
+
yarnOrigCtx
|
|
110
|
+
Set the YaRN original context size of model
|
|
111
|
+
defragmentationThreshold
|
|
112
|
+
Set the KV cache defragmentation threshold
|
|
113
|
+
numaStrategy
|
|
114
|
+
Set optimization strategies that help on some NUMA systems (if available)
|
|
115
|
+
ropeScalingType
|
|
116
|
+
Set the RoPE frequency scaling method, defaults to linear unless specified by the model
|
|
117
|
+
poolingType
|
|
118
|
+
Set the pooling type for embeddings, use model default if unspecified
|
|
119
|
+
modelDraft
|
|
120
|
+
Set the draft model for speculative decoding
|
|
121
|
+
modelAlias
|
|
122
|
+
Set a model alias
|
|
123
|
+
lookupCacheStaticFilePath
|
|
124
|
+
Set path to static lookup cache to use for lookup decoding (not updated by generation)
|
|
125
|
+
lookupCacheDynamicFilePath
|
|
126
|
+
Set path to dynamic lookup cache to use for lookup decoding (updated by generation)
|
|
127
|
+
flashAttention
|
|
128
|
+
Whether to enable Flash Attention
|
|
129
|
+
inputPrefixBos
|
|
130
|
+
Whether to add prefix BOS to user inputs, preceding the `--in-prefix` string
|
|
131
|
+
useMmap
|
|
132
|
+
Whether to use memory-map model (faster load but may increase pageouts if not using mlock)
|
|
133
|
+
useMlock
|
|
134
|
+
Whether to force the system to keep model in RAM rather than swapping or compressing
|
|
135
|
+
noKvOffload
|
|
136
|
+
Whether to disable KV offload
|
|
137
|
+
systemPrompt
|
|
138
|
+
Set a system prompt to use
|
|
139
|
+
chatTemplate
|
|
140
|
+
The chat template to use
|
|
141
|
+
inputPrefix
|
|
142
|
+
Set the prompt to start generation with
|
|
143
|
+
inputSuffix
|
|
144
|
+
Set a suffix for infilling
|
|
145
|
+
cachePrompt
|
|
146
|
+
Whether to remember the prompt to avoid reprocessing it
|
|
147
|
+
nPredict
|
|
148
|
+
Set the number of tokens to predict
|
|
149
|
+
topK
|
|
150
|
+
Set top-k sampling
|
|
151
|
+
topP
|
|
152
|
+
Set top-p sampling
|
|
153
|
+
minP
|
|
154
|
+
Set min-p sampling
|
|
155
|
+
tfsZ
|
|
156
|
+
Set tail free sampling, parameter z
|
|
157
|
+
typicalP
|
|
158
|
+
Set locally typical sampling, parameter p
|
|
159
|
+
temperature
|
|
160
|
+
Set the temperature
|
|
161
|
+
dynatempRange
|
|
162
|
+
Set the dynamic temperature range
|
|
163
|
+
dynatempExponent
|
|
164
|
+
Set the dynamic temperature exponent
|
|
165
|
+
repeatLastN
|
|
166
|
+
Set the last n tokens to consider for penalties
|
|
167
|
+
repeatPenalty
|
|
168
|
+
Set the penalty of repeated sequences of tokens
|
|
169
|
+
frequencyPenalty
|
|
170
|
+
Set the repetition alpha frequency penalty
|
|
171
|
+
presencePenalty
|
|
172
|
+
Set the repetition alpha presence penalty
|
|
173
|
+
miroStat
|
|
174
|
+
Set MiroStat sampling strategies.
|
|
175
|
+
mirostatTau
|
|
176
|
+
Set the MiroStat target entropy, parameter tau
|
|
177
|
+
mirostatEta
|
|
178
|
+
Set the MiroStat learning rate, parameter eta
|
|
179
|
+
penalizeNl
|
|
180
|
+
Whether to penalize newline tokens
|
|
181
|
+
nKeep
|
|
182
|
+
Set the number of tokens to keep from the initial prompt
|
|
183
|
+
seed
|
|
184
|
+
Set the RNG seed
|
|
185
|
+
nProbs
|
|
186
|
+
Set the amount top tokens probabilities to output if greater than 0.
|
|
187
|
+
minKeep
|
|
188
|
+
Set the amount of tokens the samplers should return at least (0 = disabled)
|
|
189
|
+
grammar
|
|
190
|
+
Set BNF-like grammar to constrain generations
|
|
191
|
+
penaltyPrompt
|
|
192
|
+
Override which part of the prompt is penalized for repetition.
|
|
193
|
+
ignoreEos
|
|
194
|
+
Set whether to ignore end of stream token and continue generating (implies --logit-bias 2-inf)
|
|
195
|
+
disableTokenIds
|
|
196
|
+
Set the token ids to disable in the completion
|
|
197
|
+
stopStrings
|
|
198
|
+
Set strings upon seeing which token generation is stopped
|
|
199
|
+
samplers
|
|
200
|
+
Set which samplers to use for token generation in the given order
|
|
201
|
+
useChatTemplate
|
|
202
|
+
Set whether or not generate should apply a chat template
|
|
203
|
+
|
|
204
|
+
Notes
|
|
205
|
+
-----
|
|
206
|
+
This annotator is designed for reranking tasks and requires setting a query using ``setQuery``.
|
|
207
|
+
The query represents the search intent against which documents will be ranked. Each input
|
|
208
|
+
document receives a relevance score in the output metadata.
|
|
209
|
+
|
|
210
|
+
To use GPU inference with this annotator, make sure to use the Spark NLP GPU package and set
|
|
211
|
+
the number of GPU layers with the `setNGpuLayers` method.
|
|
212
|
+
|
|
213
|
+
When using larger models, we recommend adjusting GPU usage with `setNCtx` and `setNGpuLayers`
|
|
214
|
+
according to your hardware to avoid out-of-memory errors.
|
|
215
|
+
|
|
216
|
+
Examples
|
|
217
|
+
--------
|
|
218
|
+
>>> import sparknlp
|
|
219
|
+
>>> from sparknlp.base import *
|
|
220
|
+
>>> from sparknlp.annotator import *
|
|
221
|
+
>>> from pyspark.ml import Pipeline
|
|
222
|
+
>>> document = DocumentAssembler() \\
|
|
223
|
+
... .setInputCol("text") \\
|
|
224
|
+
... .setOutputCol("document")
|
|
225
|
+
>>> reranker = AutoGGUFReranker.pretrained("bge-reranker-v2-m3-Q4_K_M") \\
|
|
226
|
+
... .setInputCols(["document"]) \\
|
|
227
|
+
... .setOutputCol("reranked_documents") \\
|
|
228
|
+
... .setBatchSize(4) \\
|
|
229
|
+
+    ...     .setQuery("A man is eating pasta.")
+    >>> pipeline = Pipeline().setStages([document, reranker])
+    >>> data = spark.createDataFrame([
+    ...     ["A man is eating food."],
+    ...     ["A man is eating a piece of bread."],
+    ...     ["The girl is carrying a baby."],
+    ...     ["A man is riding a horse."]
+    ... ]).toDF("text")
+    >>> result = pipeline.fit(data).transform(data)
+    >>> result.select("reranked_documents").show(truncate=False)
+    # Each document will have a relevance_score in metadata showing how relevant it is to the query
+    """
+
+    name = "AutoGGUFReranker"
+    inputAnnotatorTypes = [AnnotatorType.DOCUMENT]
+    outputAnnotatorType = AnnotatorType.DOCUMENT
+
+    query = Param(Params._dummy(), "query",
+                  "The query to be used for reranking. If not set, the input text will be used as the query.",
+                  typeConverter=TypeConverters.toString)
+
+    @keyword_only
+    def __init__(self, classname="com.johnsnowlabs.nlp.annotators.seq2seq.AutoGGUFReranker", java_model=None):
+        super(AutoGGUFReranker, self).__init__(
+            classname=classname,
+            java_model=java_model
+        )
+        self._setDefault(
+            useChatTemplate=True,
+            nCtx=4096,
+            nBatch=512,
+            nGpuLayers=99,
+            systemPrompt="You are a helpful assistant.",
+            query=""
+        )
+
+    def setQuery(self, value: str):
+        """Set the query to be used for reranking.
+
+        Parameters
+        ----------
+        value : str
+            The query text that documents will be ranked against.
+
+        Returns
+        -------
+        AutoGGUFReranker
+            This instance, for method chaining.
+        """
+        return self._set(query=value)
+
+    def getQuery(self):
+        """Get the current query used for reranking.
+
+        Returns
+        -------
+        str
+            The current query string.
+        """
+        return self._call_java("getQuery")
+
+    @staticmethod
+    def loadSavedModel(folder, spark_session):
+        """Loads a locally saved model.
+
+        Parameters
+        ----------
+        folder : str
+            Folder of the saved model
+        spark_session : pyspark.sql.SparkSession
+            The current SparkSession
+
+        Returns
+        -------
+        AutoGGUFReranker
+            The restored model
+        """
+        from sparknlp.internal import _AutoGGUFRerankerLoader
+        jModel = _AutoGGUFRerankerLoader(folder, spark_session._jsparkSession)._java_obj
+        return AutoGGUFReranker(java_model=jModel)
+
+    @staticmethod
+    def pretrained(name="bge-reranker-v2-m3-Q4_K_M", lang="en", remote_loc=None):
+        """Downloads and loads a pretrained model.
+
+        Parameters
+        ----------
+        name : str, optional
+            Name of the pretrained model, by default "bge-reranker-v2-m3-Q4_K_M"
+        lang : str, optional
+            Language of the pretrained model, by default "en"
+        remote_loc : str, optional
+            Optional remote address of the resource, by default None. Will use
+            Spark NLP's repositories otherwise.
+
+        Returns
+        -------
+        AutoGGUFReranker
+            The restored model
+        """
+        from sparknlp.pretrained import ResourceDownloader
+        return ResourceDownloader.downloadModel(AutoGGUFReranker, name, lang, remote_loc)
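The reranker above stores a `relevance_score` entry in each output annotation's metadata, and downstream code typically sorts documents by it. A minimal pure-Python sketch of that post-processing step, assuming annotation-like dicts whose metadata values are strings (as Spark NLP metadata maps are); the dict shape here is illustrative, not the exact Annotation schema:

```python
def rank_by_relevance(annotations):
    """Sort annotation-like dicts by their metadata relevance_score, highest first."""
    return sorted(
        annotations,
        key=lambda ann: float(ann["metadata"]["relevance_score"]),
        reverse=True,
    )

# Hypothetical reranker output for the query "A man is eating pasta."
docs = [
    {"result": "A man is eating food.", "metadata": {"relevance_score": "0.91"}},
    {"result": "The girl is carrying a baby.", "metadata": {"relevance_score": "0.02"}},
    {"result": "A man is eating a piece of bread.", "metadata": {"relevance_score": "0.74"}},
]
best = rank_by_relevance(docs)[0]["result"]
```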
@@ -43,7 +43,7 @@ class AutoGGUFVisionModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppPropert
     ...     .setOutputCol("completions")
 
-    The default model is ``"
+    The default model is ``"Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf"``, if no name is provided.
 
     For available pretrained models please see the `Models Hub <https://sparknlp.org/models>`__.
 
@@ -116,8 +116,6 @@ class AutoGGUFVisionModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppPropert
         Set optimization strategies that help on some NUMA systems (if available)
     ropeScalingType
         Set the RoPE frequency scaling method, defaults to linear unless specified by the model
-    poolingType
-        Set the pooling type for embeddings, use model default if unspecified
     modelDraft
         Set the draft model for speculative decoding
     modelAlias
@@ -126,8 +124,6 @@ class AutoGGUFVisionModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppPropert
         Set path to static lookup cache to use for lookup decoding (not updated by generation)
     lookupCacheDynamicFilePath
         Set path to dynamic lookup cache to use for lookup decoding (updated by generation)
-    embedding
-        Whether to load model with embedding support
     flashAttention
         Whether to enable Flash Attention
     inputPrefixBos
@@ -284,8 +280,10 @@ class AutoGGUFVisionModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppPropert
             useChatTemplate=True,
             nCtx=4096,
             nBatch=512,
-
-
+            nPredict=100,
+            nGpuLayers=99,
+            systemPrompt="You are a helpful assistant.",
+            batchSize=2,
         )
 
     @staticmethod
@@ -311,13 +309,13 @@ class AutoGGUFVisionModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppPropert
         return AutoGGUFVisionModel(java_model=jModel)
 
     @staticmethod
-    def pretrained(name="
+    def pretrained(name="Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf", lang="en", remote_loc=None):
         """Downloads and loads a pretrained model.
 
         Parameters
         ----------
         name : str, optional
-            Name of the pretrained model, by default "
+            Name of the pretrained model, by default "Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf"
         lang : str, optional
             Language of the pretrained model, by default "en"
         remote_loc : str, optional
sparknlp/common/properties.py CHANGED

@@ -628,7 +628,6 @@ class HasGeneratorProperties:
         "The number of sequences to return from the beam search.",
         typeConverter=TypeConverters.toInt)
 
-
     def setTask(self, value):
         """Sets the transformer's task, e.g. ``summarize:``.
 
@@ -639,7 +638,6 @@ class HasGeneratorProperties:
         """
         return self._set(task=value)
 
-
     def setMinOutputLength(self, value):
        """Sets minimum length of the sequence to be generated.
 
@@ -650,7 +648,6 @@ class HasGeneratorProperties:
         """
         return self._set(minOutputLength=value)
 
-
     def setMaxOutputLength(self, value):
         """Sets maximum length of output text.
 
@@ -661,7 +658,6 @@ class HasGeneratorProperties:
         """
         return self._set(maxOutputLength=value)
 
-
     def setDoSample(self, value):
         """Sets whether or not to use sampling, use greedy decoding otherwise.
 
@@ -672,7 +668,6 @@ class HasGeneratorProperties:
         """
         return self._set(doSample=value)
 
-
     def setTemperature(self, value):
         """Sets the value used to module the next token probabilities.
 
@@ -683,7 +678,6 @@ class HasGeneratorProperties:
         """
         return self._set(temperature=value)
 
-
     def setTopK(self, value):
         """Sets the number of highest probability vocabulary tokens to keep for
         top-k-filtering.
 
@@ -695,7 +689,6 @@ class HasGeneratorProperties:
         """
         return self._set(topK=value)
 
-
     def setTopP(self, value):
         """Sets the top cumulative probability for vocabulary tokens.
 
@@ -709,7 +702,6 @@ class HasGeneratorProperties:
         """
         return self._set(topP=value)
 
-
     def setRepetitionPenalty(self, value):
         """Sets the parameter for repetition penalty. 1.0 means no penalty.
 
@@ -725,7 +717,6 @@ class HasGeneratorProperties:
         """
         return self._set(repetitionPenalty=value)
 
-
     def setNoRepeatNgramSize(self, value):
         """Sets size of n-grams that can only occur once.
 
@@ -738,7 +729,6 @@ class HasGeneratorProperties:
         """
         return self._set(noRepeatNgramSize=value)
 
-
     def setBeamSize(self, value):
         """Sets the number of beam size for beam search.
 
@@ -749,7 +739,6 @@ class HasGeneratorProperties:
         """
         return self._set(beamSize=value)
 
-
     def setNReturnSequences(self, value):
         """Sets the number of sequences to return from the beam search.
 
@@ -845,11 +834,10 @@ class HasLlamaCppProperties:
                   typeConverter=TypeConverters.toString)
     # Set the pooling type for embeddings, use model default if unspecified
     #
-    # -
-    # -
-    # -
-    # -
-    # - 4 RANK: For reranked models
+    # - MEAN: Mean Pooling
+    # - CLS: CLS Pooling
+    # - LAST: Last token pooling
+    # - RANK: For reranked models
     poolingType = Param(Params._dummy(), "poolingType",
                         "Set the pooling type for embeddings, use model default if unspecified",
                         typeConverter=TypeConverters.toString)
@@ -882,6 +870,10 @@ class HasLlamaCppProperties:
                         typeConverter=TypeConverters.toString)
     chatTemplate = Param(Params._dummy(), "chatTemplate", "The chat template to use",
                          typeConverter=TypeConverters.toString)
+    logVerbosity = Param(Params._dummy(), "logVerbosity", "Set the log verbosity level",
+                         typeConverter=TypeConverters.toInt)
+    disableLog = Param(Params._dummy(), "disableLog", "Whether to disable logging",
+                       typeConverter=TypeConverters.toBoolean)
 
     # -------- INFERENCE PARAMETERS --------
     inputPrefix = Param(Params._dummy(), "inputPrefix", "Set the prompt to start generation with",
@@ -1082,10 +1074,10 @@ class HasLlamaCppProperties:
         ropeScalingTypeUpper = ropeScalingType.upper()
         ropeScalingTypes = ["NONE", "LINEAR", "YARN"]
         if ropeScalingTypeUpper not in ropeScalingTypes:
-
-
-
-
+            raise ValueError(
+                f"Invalid RoPE scaling type: {ropeScalingType}. "
+                + f"Valid values are: {ropeScalingTypes}"
+            )
         return self._set(ropeScalingType=ropeScalingTypeUpper)
 
     def setPoolingType(self, poolingType: str):
@@ -1093,11 +1085,10 @@ class HasLlamaCppProperties:
 
         Possible values:
 
-        -
-        -
-        -
-        -
-        - 4 RANK: For reranked models
+        - MEAN: Mean Pooling
+        - CLS: CLS Pooling
+        - LAST: Last token pooling
+        - RANK: For reranked models
         """
         poolingTypeUpper = poolingType.upper()
         poolingTypes = ["NONE", "MEAN", "CLS", "LAST", "RANK"]
@@ -1124,10 +1115,6 @@ class HasLlamaCppProperties:
     # """Set path to dynamic lookup cache to use for lookup decoding (updated by generation)"""
     # return self._set(lookupCacheDynamicFilePath=lookupCacheDynamicFilePath)
 
-    def setEmbedding(self, embedding: bool):
-        """Whether to load model with embedding support"""
-        return self._set(embedding=embedding)
-
     def setFlashAttention(self, flashAttention: bool):
         """Whether to enable Flash Attention"""
         return self._set(flashAttention=flashAttention)
@@ -1280,11 +1267,19 @@ class HasLlamaCppProperties:
     def setUseChatTemplate(self, useChatTemplate: bool):
         """Set whether generate should apply a chat template"""
         return self._set(useChatTemplate=useChatTemplate)
-
+
     def setNParallel(self, nParallel: int):
         """Sets the number of parallel processes for decoding. This is an alias for `setBatchSize`."""
         return self.setBatchSize(nParallel)
 
+    def setLogVerbosity(self, logVerbosity: int):
+        """Set the log verbosity level"""
+        return self._set(logVerbosity=logVerbosity)
+
+    def setDisableLog(self, disableLog: bool):
+        """Whether to disable logging"""
+        return self._set(disableLog=disableLog)
+
     # -------- JAVA SETTERS --------
     def setTokenIdBias(self, tokenIdBias: Dict[int, float]):
         """Set token id bias"""
sparknlp/internal/__init__.py CHANGED

@@ -1191,4 +1191,9 @@ class _Phi4Loader(ExtendedJavaWrapper):
             path,
             jspark,
             use_openvino,
-        )
+        )
+
+class _AutoGGUFRerankerLoader(ExtendedJavaWrapper):
+    def __init__(self, path, jspark):
+        super(_AutoGGUFRerankerLoader, self).__init__(
+            "com.johnsnowlabs.nlp.annotators.seq2seq.AutoGGUFReranker.loadSavedModel", path, jspark)
sparknlp/reader/reader2doc.py CHANGED

@@ -25,7 +25,7 @@ class Reader2Doc(
     HasExcelReaderProperties,
     HasHTMLReaderProperties,
     HasPowerPointProperties,
-    HasTextReaderProperties
+    HasTextReaderProperties
 ):
     """
     The Reader2Doc annotator allows you to use reading files more smoothly within existing
@@ -36,7 +36,7 @@ class Reader2Doc(
     output as a structured Spark DataFrame.
 
     Supported formats include:
-
+
     - Plain text
     - HTML
     - Word (.doc/.docx)
@@ -77,42 +77,49 @@ class Reader2Doc(
         Params._dummy(),
         "contentPath",
         "contentPath path to files to read",
-        typeConverter=TypeConverters.toString
+        typeConverter=TypeConverters.toString
     )
 
     outputCol = Param(
         Params._dummy(),
         "outputCol",
         "output column name",
-        typeConverter=TypeConverters.toString
+        typeConverter=TypeConverters.toString
     )
 
     contentType = Param(
         Params._dummy(),
         "contentType",
         "Set the content type to load following MIME specification",
-        typeConverter=TypeConverters.toString
+        typeConverter=TypeConverters.toString
    )
 
     explodeDocs = Param(
         Params._dummy(),
         "explodeDocs",
         "whether to explode the documents into separate rows",
-        typeConverter=TypeConverters.toBoolean
+        typeConverter=TypeConverters.toBoolean
     )
 
     flattenOutput = Param(
         Params._dummy(),
         "flattenOutput",
         "If true, output is flattened to plain text with minimal metadata",
-        typeConverter=TypeConverters.toBoolean
+        typeConverter=TypeConverters.toBoolean
     )
 
     titleThreshold = Param(
         Params._dummy(),
         "titleThreshold",
         "Minimum font size threshold for title detection in PDF docs",
-        typeConverter=TypeConverters.toFloat
+        typeConverter=TypeConverters.toFloat
+    )
+
+    outputFormat = Param(
+        Params._dummy(),
+        "outputFormat",
+        "Output format for the table content. Options are 'plain-text' or 'html-table'. Default is 'json-table'.",
+        typeConverter=TypeConverters.toString
     )
 
     @keyword_only
@@ -126,7 +133,6 @@ class Reader2Doc(
             titleThreshold=18
         )
     @keyword_only
-
     def setParams(self):
         kwargs = self._input_kwargs
         return self._set(**kwargs)
@@ -192,3 +198,13 @@ class Reader2Doc(
             Minimum font size threshold for title detection in PDF docs
         """
         return self._set(titleThreshold=value)
+
+    def setOutputFormat(self, value):
+        """Sets the output format for the table content.
+
+        Parameters
+        ----------
+        value : str
+            Output format for the table content. Options are 'plain-text' or 'html-table'. Default is 'json-table'.
+        """
+        return self._set(outputFormat=value)
sparknlp/reader/reader2table.py ADDED

@@ -0,0 +1,163 @@
+# Copyright 2017-2025 John Snow Labs
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from pyspark import keyword_only
+from pyspark.ml.param import TypeConverters, Params, Param
+
+from sparknlp.common import AnnotatorType
+from sparknlp.internal import AnnotatorTransformer
+from sparknlp.partition.partition_properties import *
+
+class Reader2Table(
+    AnnotatorTransformer,
+    HasEmailReaderProperties,
+    HasExcelReaderProperties,
+    HasHTMLReaderProperties,
+    HasPowerPointProperties,
+    HasTextReaderProperties
+):
+    name = 'Reader2Table'
+
+    outputAnnotatorType = AnnotatorType.DOCUMENT
+
+    contentPath = Param(
+        Params._dummy(),
+        "contentPath",
+        "contentPath path to files to read",
+        typeConverter=TypeConverters.toString
+    )
+
+    outputCol = Param(
+        Params._dummy(),
+        "outputCol",
+        "output column name",
+        typeConverter=TypeConverters.toString
+    )
+
+    contentType = Param(
+        Params._dummy(),
+        "contentType",
+        "Set the content type to load following MIME specification",
+        typeConverter=TypeConverters.toString
+    )
+
+    explodeDocs = Param(
+        Params._dummy(),
+        "explodeDocs",
+        "whether to explode the documents into separate rows",
+        typeConverter=TypeConverters.toBoolean
+    )
+
+    flattenOutput = Param(
+        Params._dummy(),
+        "flattenOutput",
+        "If true, output is flattened to plain text with minimal metadata",
+        typeConverter=TypeConverters.toBoolean
+    )
+
+    titleThreshold = Param(
+        Params._dummy(),
+        "titleThreshold",
+        "Minimum font size threshold for title detection in PDF docs",
+        typeConverter=TypeConverters.toFloat
+    )
+
+    outputFormat = Param(
+        Params._dummy(),
+        "outputFormat",
+        "Output format for the table content. Options are 'plain-text' or 'html-table'. Default is 'json-table'.",
+        typeConverter=TypeConverters.toString
+    )
+
+    @keyword_only
+    def __init__(self):
+        super(Reader2Table, self).__init__(classname="com.johnsnowlabs.reader.Reader2Table")
+        self._setDefault(outputCol="document")
+
+    @keyword_only
+    def setParams(self):
+        kwargs = self._input_kwargs
+        return self._set(**kwargs)
+
+    def setContentPath(self, value):
+        """Sets content path.
+
+        Parameters
+        ----------
+        value : str
+            contentPath path to files to read
+        """
+        return self._set(contentPath=value)
+
+    def setContentType(self, value):
+        """
+        Set the content type to load following MIME specification
+
+        Parameters
+        ----------
+        value : str
+            content type to load following MIME specification
+        """
+        return self._set(contentType=value)
+
+    def setExplodeDocs(self, value):
+        """Sets whether to explode the documents into separate rows.
+
+        Parameters
+        ----------
+        value : boolean
+            Whether to explode the documents into separate rows
+        """
+        return self._set(explodeDocs=value)
+
+    def setOutputCol(self, value):
+        """Sets output column name.
+
+        Parameters
+        ----------
+        value : str
+            Name of the Output Column
+        """
+        return self._set(outputCol=value)
+
+    def setFlattenOutput(self, value):
+        """Sets whether to flatten the output to plain text with minimal metadata.
+
+        Parameters
+        ----------
+        value : bool
+            If true, output is flattened to plain text with minimal metadata
+        """
+        return self._set(flattenOutput=value)
+
+    def setTitleThreshold(self, value):
+        """Sets the minimum font size threshold for title detection in PDF documents.
+
+        Parameters
+        ----------
+        value : float
+            Minimum font size threshold for title detection in PDF docs
+        """
+        return self._set(titleThreshold=value)
+
+    def setOutputFormat(self, value):
+        """Sets the output format for the table content.
+
+        Parameters
+        ----------
+        value : str
+            Output format for the table content. Options are 'plain-text' or 'html-table'. Default is 'json-table'.
+        """
+        return self._set(outputFormat=value)
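`Reader2Table`'s `outputFormat` accepts `'plain-text'`, `'html-table'`, or the default `'json-table'`. What those three options roughly mean for a single parsed table can be sketched in plain Python; the rendering below is an illustration of the three formats, not Spark NLP's actual serialization:

```python
import json

def render_table(rows, output_format="json-table"):
    """Render a table (a list of row lists) in one of the three outputFormat options."""
    if output_format == "json-table":
        # Default: the table as a JSON array of rows.
        return json.dumps(rows)
    if output_format == "html-table":
        # Rows become <tr>, cells become <td>, wrapped in a <table>.
        body = "".join(
            "<tr>" + "".join(f"<td>{cell}</td>" for cell in row) + "</tr>"
            for row in rows
        )
        return f"<table>{body}</table>"
    if output_format == "plain-text":
        # Tab-separated cells, newline-separated rows.
        return "\n".join("\t".join(str(cell) for cell in row) for row in rows)
    raise ValueError(f"Unknown outputFormat: {output_format}")

rows = [["name", "score"], ["alice", 0.9]]
```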
File without changes

File without changes