spark-nlp 6.1.0__py2.py3-none-any.whl → 6.1.2rc1__py2.py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Note: this version of spark-nlp has been flagged as a potentially problematic release.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: spark-nlp
-Version: 6.1.0
+Version: 6.1.2rc1
 Summary: John Snow Labs Spark NLP is a natural language processing library built on top of Apache Spark ML. It provides simple, performant & accurate NLP annotations for machine learning pipelines, that scale easily in a distributed environment.
 Home-page: https://github.com/JohnSnowLabs/spark-nlp
 Author: John Snow Labs
@@ -58,7 +58,7 @@ Dynamic: summary
 
 Spark NLP is a state-of-the-art Natural Language Processing library built on top of Apache Spark. It provides **simple**, **performant** & **accurate** NLP annotations for machine learning pipelines that **scale** easily in a distributed environment.
 
-Spark NLP comes with **83000+** pretrained **pipelines** and **models** in more than **200+** languages.
+Spark NLP comes with **100000+** pretrained **pipelines** and **models** in more than **200+** languages.
 
 It also offers tasks such as **Tokenization**, **Word Segmentation**, **Part-of-Speech Tagging**, Word and Sentence **Embeddings**, **Named Entity Recognition**, **Dependency Parsing**, **Spell Checking**, **Text Classification**, **Sentiment Analysis**, **Token Classification**, **Machine Translation** (+180 languages), **Summarization**, **Question Answering**, **Table Question Answering**, **Text Generation**, **Image Classification**, **Image to Text (captioning)**, **Automatic Speech Recognition**, **Zero-Shot Learning**, and many more [NLP tasks](#features).
 
 **Spark NLP** is the only open-source NLP library in **production** that offers state-of-the-art transformers such as **BERT**, **CamemBERT**, **ALBERT**, **ELECTRA**, **XLNet**, **DistilBERT**, **RoBERTa**, **DeBERTa**, **XLM-RoBERTa**, **Longformer**, **ELMO**, **Universal Sentence Encoder**, **Llama-2**, **M2M100**, **BART**, **Instructor**, **E5**, **Google T5**, **MarianMT**, **OpenAI GPT2**, **Vision Transformers (ViT)**, **OpenAI Whisper**, **Llama**, **Mistral**, **Phi**, **Qwen2**, and many more not only to **Python** and **R**, but also to **JVM** ecosystem (**Java**, **Scala**, and **Kotlin**) at **scale** by extending **Apache Spark** natively.
@@ -102,7 +102,7 @@ $ java -version
 $ conda create -n sparknlp python=3.7 -y
 $ conda activate sparknlp
 # spark-nlp by default is based on pyspark 3.x
-$ pip install spark-nlp==6.1.0 pyspark==3.3.1
+$ pip install spark-nlp==6.1.1 pyspark==3.3.1
 ```
 
 In Python console or Jupyter `Python3` kernel:
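
For orientation, here is a minimal sketch of the quick-start flow the README payload describes, assuming a working local PySpark installation; `explain_document_dl` is a pipeline name taken from the official Spark NLP documentation, not from this diff.

```python
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

# Start a Spark session preconfigured for Spark NLP
spark = sparknlp.start()
print(sparknlp.version())  # reports "6.1.1" for this wheel (see __init__.py below)

# Download a small pretrained pipeline and annotate a sample sentence
pipeline = PretrainedPipeline("explain_document_dl", lang="en")
result = pipeline.annotate("Spark NLP ships with pretrained pipelines and models.")
print(result["token"])
```
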
@@ -168,11 +168,11 @@ For a quick example of using pipelines and models take a look at our official [d
 
 ### Apache Spark Support
 
-Spark NLP *6.1.0* has been built on top of Apache Spark 3.4 while fully supports Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, 3.4.x, and 3.5.x
+Spark NLP *6.1.1* has been built on top of Apache Spark 3.4 while fully supports Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, 3.4.x, and 3.5.x
 
 | Spark NLP | Apache Spark 3.5.x | Apache Spark 3.4.x | Apache Spark 3.3.x | Apache Spark 3.2.x | Apache Spark 3.1.x | Apache Spark 3.0.x | Apache Spark 2.4.x | Apache Spark 2.3.x |
 |-----------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
-| 6.0.x | YES | YES | YES | YES | YES | YES | NO | NO |
+| 6.x.x and up | YES | YES | YES | YES | YES | YES | NO | NO |
 | 5.5.x | YES | YES | YES | YES | YES | YES | NO | NO |
 | 5.4.x | YES | YES | YES | YES | YES | YES | NO | NO |
 | 5.3.x | YES | YES | YES | YES | YES | YES | NO | NO |
@@ -198,7 +198,7 @@ Find out more about 4.x `SparkNLP` versions in our official [documentation](http
 
 ### Databricks Support
 
-Spark NLP 6.1.0 has been tested and is compatible with the following runtimes:
+Spark NLP 6.1.1 has been tested and is compatible with the following runtimes:
 
 | **CPU** | **GPU** |
 |--------------------|--------------------|
@@ -206,16 +206,17 @@ Spark NLP 6.1.0 has been tested and is compatible with the following runtimes:
 | 14.2 / 14.2 ML | 14.2 ML & GPU |
 | 14.3 / 14.3 ML | 14.3 ML & GPU |
 | 15.0 / 15.0 ML | 15.0 ML & GPU |
-| 15.1 / 15.0 ML | 15.1 ML & GPU |
-| 15.2 / 15.0 ML | 15.2 ML & GPU |
-| 15.3 / 15.0 ML | 15.3 ML & GPU |
-| 15.4 / 15.0 ML | 15.4 ML & GPU |
+| 15.1 / 15.1 ML | 15.1 ML & GPU |
+| 15.2 / 15.2 ML | 15.2 ML & GPU |
+| 15.3 / 15.3 ML | 15.3 ML & GPU |
+| 15.4 / 15.4 ML | 15.4 ML & GPU |
+| 16.4 / 16.4 ML | 16.4 ML & GPU |
 
 We are compatible with older runtimes. For a full list check databricks support in our official [documentation](https://sparknlp.org/docs/en/install#databricks-support)
 
 ### EMR Support
 
-Spark NLP 6.1.0 has been tested and is compatible with the following EMR releases:
+Spark NLP 6.1.1 has been tested and is compatible with the following EMR releases:
 
 | **EMR Release** |
 |--------------------|
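spark_nlp-6.1.0.dist-info/RECORD → spark_nlp-6.1.2rc1.dist-info/RECORD RENAMED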
@@ -3,7 +3,7 @@ com/johnsnowlabs/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,
 com/johnsnowlabs/ml/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
 com/johnsnowlabs/ml/ai/__init__.py,sha256=YQiK2M7U4d8y5irPy_HB8ae0mSpqS9583MH44pnKJXc,295
 com/johnsnowlabs/nlp/__init__.py,sha256=DPIVXtONO5xXyOk-HB0-sNiHAcco17NN13zPS_6Uw8c,294
-sparknlp/__init__.py,sha256=wxPbTrab8A3tELe8XRaGCfuZ-T8Dc8szbOHXH9ZgLIU,13814
+sparknlp/__init__.py,sha256=sEwJ50P2C-LGn-MZ8oI6jO6N0OutYSL3b6bCVkOo2wc,13814
 sparknlp/annotation.py,sha256=I5zOxG5vV2RfPZfqN9enT1i4mo6oBcn3Lrzs37QiOiA,5635
 sparknlp/annotation_audio.py,sha256=iRV_InSVhgvAwSRe9NTbUH9v6OGvTM-FPCpSAKVu0mE,1917
 sparknlp/annotation_image.py,sha256=xhCe8Ko-77XqWVuuYHFrjKqF6zPd8Z-RY_rmZXNwCXU,2547
@@ -105,7 +105,7 @@ sparknlp/annotator/dependency/dependency_parser.py,sha256=SxyvHPp8Hs1Xnm5X1nLTMi
 sparknlp/annotator/dependency/typed_dependency_parser.py,sha256=60vPdYkbFk9MPGegg3m9Uik9cMXpMZd8tBvXG39gNww,12456
 sparknlp/annotator/embeddings/__init__.py,sha256=Aw1oaP5DI0OS6259c0TEZZ6j3VFSvYFEerah5a-udVw,2528
 sparknlp/annotator/embeddings/albert_embeddings.py,sha256=6Rd1LIn8oFIpq_ALcJh-RUjPEO7Ht8wsHY6JHSFyMkw,9995
-sparknlp/annotator/embeddings/auto_gguf_embeddings.py,sha256=IlqkPGOH2lmZvxEyDSGX-G90DtTFOe2Rvujfbg5zvlU,20185
+sparknlp/annotator/embeddings/auto_gguf_embeddings.py,sha256=TRAYbhGS4K8uSpsScvDr6uD3lYdxMpCUjwDMhV_74rM,19977
 sparknlp/annotator/embeddings/bert_embeddings.py,sha256=HVUjkg56kBcpGZCo-fmPG5uatMDF3swW_lnbpy1SgSI,8463
 sparknlp/annotator/embeddings/bert_sentence_embeddings.py,sha256=NQy9KuXT9aKsTpYCR5RAeoFWI2YqEGorbdYrf_0KKmw,9148
 sparknlp/annotator/embeddings/bge_embeddings.py,sha256=ZGbxssjJFaSfbcgqAPV5hsu81SnC0obgCVNOoJkArDA,8105
@@ -168,8 +168,8 @@ sparknlp/annotator/sentiment/__init__.py,sha256=Lq3vKaZS1YATLMg0VNXSVtkWL5q5G9ta
 sparknlp/annotator/sentiment/sentiment_detector.py,sha256=m545NGU0Xzg_PO6_qIfpli1uZj7JQcyFgqe9R6wAPFI,8154
 sparknlp/annotator/sentiment/vivekn_sentiment.py,sha256=4rpXWDgzU6ddnbrSCp9VdLb2epCc9oZ3c6XcqxEw8nk,9655
 sparknlp/annotator/seq2seq/__init__.py,sha256=Aj43G1MuQE0mW7LakCWPjiTkIGl7iHPAnKIwT_DfdIM,1781
-sparknlp/annotator/seq2seq/auto_gguf_model.py,sha256=Oah_RvOy9YrvfnnMRMKOGJHnAMYxo0SeczBZsndM3kY,11638
-sparknlp/annotator/seq2seq/auto_gguf_vision_model.py,sha256=EYrm8EW7AMq3AoIKPe7Gp6ayBlFpWeg76AsAr4nanqU,15346
+sparknlp/annotator/seq2seq/auto_gguf_model.py,sha256=yhZQHMHfp88rQvLHTWyS-8imZrwqp-8RQQwnw6PmHfc,11749
+sparknlp/annotator/seq2seq/auto_gguf_vision_model.py,sha256=swBek2026dW6BOX5O9P8Uq41X2GC71VGW0ADFeUIvs0,15299
 sparknlp/annotator/seq2seq/bart_transformer.py,sha256=I1flM4yeCzEAKOdQllBC30XuedxVJ7ferkFhZ6gwEbE,18481
 sparknlp/annotator/seq2seq/cohere_transformer.py,sha256=43LZBVazZMgJRCsN7HaYjVYfJ5hRMV95QZyxMtXq-m4,13496
 sparknlp/annotator/seq2seq/cpm_transformer.py,sha256=0CnBFMlxMu0pD2QZMHyoGtIYgXqfUQm68vr6zEAa6Eg,13290
@@ -223,7 +223,7 @@ sparknlp/common/annotator_properties.py,sha256=7B1os7pBUfHo6b7IPQAXQ-nir0u3tQLzD
 sparknlp/common/annotator_type.py,sha256=ash2Ip1IOOiJamPVyy_XQj8Ja_DRHm0b9Vj4Ni75oKM,1225
 sparknlp/common/coverage_result.py,sha256=No4PSh1HSs3PyRI1zC47x65tWgfirqPI290icHQoXEI,823
 sparknlp/common/match_strategy.py,sha256=kt1MUPqU1wCwk5qCdYk6jubHbU-5yfAYxb9jjAOrdnY,1678
-sparknlp/common/properties.py,sha256=4jDyxr2IGWEuNlGtOoPzqdCF7oLAKGy1z6MtqxUVMug,52704
+sparknlp/common/properties.py,sha256=7eBxODxKmFQAgOtrxUH9ly4LugUlkNRVXNQcM60AUK4,53025
 sparknlp/common/read_as.py,sha256=imxPGwV7jr4Li_acbo0OAHHRGCBbYv-akzEGaBWEfcY,1226
 sparknlp/common/recursive_annotator_approach.py,sha256=vqugBw22cE3Ff7PIpRlnYFuOlchgL0nM26D8j-NdpqU,1449
 sparknlp/common/storage.py,sha256=D91H3p8EIjNspjqAYu6ephRpCUtdcAir4_PrAbkIQWE,4842
@@ -247,7 +247,8 @@ sparknlp/pretrained/utils.py,sha256=T1MrvW_DaWk_jcOjVLOea0NMFE9w8fe0ZT_5urZ_nEY,
 sparknlp/reader/__init__.py,sha256=-Toj3AIBki-zXPpV8ezFTI2LX1yP_rK2bhpoa8nBkTw,685
 sparknlp/reader/enums.py,sha256=MNGug9oJ1BBLM1Pbske13kAabalDzHa2kucF5xzFpHs,770
 sparknlp/reader/pdf_to_text.py,sha256=eWw-cwjosmcSZ9eHso0F5QQoeGBBnwsOhzhCXXvMjZA,7169
-sparknlp/reader/reader2doc.py,sha256=xahxkEuNM21mb0-MHQoYLtDF1cbAYrMTRpN1-u5K3ec,6587
+sparknlp/reader/reader2doc.py,sha256=LRqfaL9nidhlPkJIwTJo7SnGYmNNfOqwEdrsWYGEdnI,7146
+sparknlp/reader/reader2table.py,sha256=GC6Yz0gQ83S6XKOi329TUNQuAvLrBxysqDkDRZPvcYA,4759
 sparknlp/reader/sparknlp_reader.py,sha256=MJs8v_ECYaV1SOabI1L_2MkVYEDVImtwgbYypO7DJSY,20623
 sparknlp/training/__init__.py,sha256=qREi9u-5Vc2VjpL6-XZsyvu5jSEIdIhowW7_kKaqMqo,852
 sparknlp/training/conll.py,sha256=wKBiSTrjc6mjsl7Nyt6B8f4yXsDJkZb-sn8iOjix9cE,6961
@@ -279,7 +280,7 @@ sparknlp/training/_tf_graph_builders_1x/ner_dl/dataset_encoder.py,sha256=R4yHFN3
 sparknlp/training/_tf_graph_builders_1x/ner_dl/ner_model.py,sha256=EoCSdcIjqQ3wv13MAuuWrKV8wyVBP0SbOEW41omHlR0,23189
 sparknlp/training/_tf_graph_builders_1x/ner_dl/ner_model_saver.py,sha256=k5CQ7gKV6HZbZMB8cKLUJuZxoZWlP_DFWdZ--aIDwsc,2356
 sparknlp/training/_tf_graph_builders_1x/ner_dl/sentence_grouper.py,sha256=pAxjWhjazSX8Vg0MFqJiuRVw1IbnQNSs-8Xp26L4nko,870
-spark_nlp-6.1.0.dist-info/METADATA,sha256=MDLwobOveRxQL45CWF-NY26iHa3a7PijF9wntBXpeZE,19722
-spark_nlp-6.1.0.dist-info/WHEEL,sha256=JNWh1Fm1UdwIQV075glCn4MVuCRs0sotJIq-J6rbxCU,109
-spark_nlp-6.1.0.dist-info/top_level.txt,sha256=uuytur4pyMRw2H_txNY2ZkaucZHUs22QF8-R03ch_-E,13
-spark_nlp-6.1.0.dist-info/RECORD,,
+spark_nlp-6.1.2rc1.dist-info/METADATA,sha256=4qK5_LPihfkDmSrLBQgH38R_VE5lzDnsygPpOccUTdc,19777
+spark_nlp-6.1.2rc1.dist-info/WHEEL,sha256=JNWh1Fm1UdwIQV075glCn4MVuCRs0sotJIq-J6rbxCU,109
+spark_nlp-6.1.2rc1.dist-info/top_level.txt,sha256=uuytur4pyMRw2H_txNY2ZkaucZHUs22QF8-R03ch_-E,13
+spark_nlp-6.1.2rc1.dist-info/RECORD,,
sparknlp/__init__.py CHANGED
@@ -66,7 +66,7 @@ sys.modules['com.johnsnowlabs.ml.ai'] = annotator
 annotators = annotator
 embeddings = annotator
 
-__version__ = "6.1.0"
+__version__ = "6.1.1"
 
 
 def start(gpu=False,
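sparknlp/annotator/embeddings/auto_gguf_embeddings.py CHANGED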
@@ -12,8 +12,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 """Contains classes for the AutoGGUFEmbeddings."""
-from typing import List
-
 from sparknlp.common import *
 
 
@@ -32,7 +30,7 @@ class AutoGGUFEmbeddings(AnnotatorModel, HasBatchedAnnotate):
     ...     .setInputCols(["document"]) \\
     ...     .setOutputCol("embeddings")
 
-    The default model is ``"Nomic_Embed_Text_v1.5.Q8_0.gguf"``, if no name is provided.
+    The default model is ``"Qwen3_Embedding_0.6B_Q8_0_gguf"``, if no name is provided.
 
     For extended examples of usage, see the
     `AutoGGUFEmbeddingsTest <https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/test/scala/com/johnsnowlabs/nlp/embeddings/AutoGGUFEmbeddingsTest.scala>`__
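
The default embeddings model changes here from a Nomic build to a Qwen3 GGUF build. Below is a minimal sketch of how the new default would be pulled in, assuming the standard `DocumentAssembler` pipeline wiring from the Spark NLP docs and a `spark` session from `sparknlp.start()`:

```python
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import AutoGGUFEmbeddings

document = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# With no name given, the new default "Qwen3_Embedding_0.6B_Q8_0_gguf" is downloaded
embeddings = AutoGGUFEmbeddings.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline(stages=[document, embeddings])
data = spark.createDataFrame([["Quantized GGUF embeddings in Spark NLP."]]).toDF("text")
pipeline.fit(data).transform(data).select("embeddings").show(truncate=False)
```
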
@@ -313,12 +311,6 @@ class AutoGGUFEmbeddings(AnnotatorModel, HasBatchedAnnotate):
         "Set the pooling type for embeddings, use model default if unspecified",
         typeConverter=TypeConverters.toString,
     )
-    embedding = Param(
-        Params._dummy(),
-        "embedding",
-        "Whether to load model with embedding support",
-        typeConverter=TypeConverters.toBoolean,
-    )
     flashAttention = Param(
         Params._dummy(),
         "flashAttention",
@@ -489,10 +481,10 @@ class AutoGGUFEmbeddings(AnnotatorModel, HasBatchedAnnotate):
             classname=classname, java_model=java_model
         )
         self._setDefault(
-            embedding=True,
             nCtx=4096,
             nBatch=512,
             poolingType="MEAN",
+            nGpuLayers=99,
         )
 
     @staticmethod
@@ -517,13 +509,13 @@ class AutoGGUFEmbeddings(AnnotatorModel, HasBatchedAnnotate):
         return AutoGGUFEmbeddings(java_model=jModel)
 
     @staticmethod
-    def pretrained(name="Nomic_Embed_Text_v1.5.Q8_0.gguf", lang="en", remote_loc=None):
+    def pretrained(name="Qwen3_Embedding_0.6B_Q8_0_gguf", lang="en", remote_loc=None):
         """Downloads and loads a pretrained model.
 
         Parameters
         ----------
         name : str, optional
-            Name of the pretrained model, by default "Nomic_Embed_Text_v1.5.Q8_0.gguf"
+            Name of the pretrained model, by default "Qwen3_Embedding_0.6B_Q8_0_gguf"
         lang : str, optional
             Language of the pretrained model, by default "en"
         remote_loc : str, optional
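sparknlp/annotator/seq2seq/auto_gguf_model.py CHANGED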
@@ -37,7 +37,11 @@ class AutoGGUFModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppProperties):
     ...     .setInputCols(["document"]) \\
     ...     .setOutputCol("completions")
 
-    The default model is ``"phi3.5_mini_4k_instruct_q4_gguf"``, if no name is provided.
+    The default model is ``"Phi_4_mini_instruct_Q4_K_M_gguf"``, if no name is provided.
+
+    AutoGGUFModel is also able to load pretrained models from AutoGGUFVisionModel. Just
+    specify the same name for the pretrained method, and it will load the text-part of the
+    multimodal model automatically.
 
     For extended examples of usage, see the
     `AutoGGUFModelTest <https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/test/scala/com/johnsnowlabs/nlp/annotators/seq2seq/AutoGGUFModelTest.scala>`__
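
Two behavioral notes land in this hunk: a new default model and the ability to reuse an AutoGGUFVisionModel checkpoint as a text-only model. A hedged sketch, with all model names taken from this diff:

```python
from sparknlp.annotator import AutoGGUFModel

# With no name given, the new default "Phi_4_mini_instruct_Q4_K_M_gguf" is used
model = AutoGGUFModel.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("completions") \
    .setNPredict(50)

# Per the new docstring, passing an AutoGGUFVisionModel name loads only the
# text part of the multimodal model:
text_only = AutoGGUFModel.pretrained("Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf")

# loadSavedModel's first argument is now documented as a path to a GGUF file
# (renamed from `folder`); `spark` is an active SparkSession:
# local = AutoGGUFModel.loadSavedModel("models/my_model.gguf", spark)
```
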
@@ -120,8 +124,6 @@ class AutoGGUFModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppProperties):
         Set path to static lookup cache to use for lookup decoding (not updated by generation)
     lookupCacheDynamicFilePath
         Set path to dynamic lookup cache to use for lookup decoding (updated by generation)
-    embedding
-        Whether to load model with embedding support
     flashAttention
         Whether to enable Flash Attention
     inputPrefixBos
@@ -252,20 +254,19 @@ class AutoGGUFModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppProperties):
             useChatTemplate=True,
             nCtx=4096,
             nBatch=512,
-            embedding=False,
             nPredict=100,
             nGpuLayers=99,
             systemPrompt="You are a helpful assistant."
         )
 
     @staticmethod
-    def loadSavedModel(folder, spark_session):
+    def loadSavedModel(path, spark_session):
         """Loads a locally saved model.
 
         Parameters
         ----------
-        folder : str
-            Folder of the saved model
+        path : str
+            Path to the gguf model
         spark_session : pyspark.sql.SparkSession
             The current SparkSession
 
@@ -275,17 +276,17 @@ class AutoGGUFModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppProperties):
             The restored model
         """
         from sparknlp.internal import _AutoGGUFLoader
-        jModel = _AutoGGUFLoader(folder, spark_session._jsparkSession)._java_obj
+        jModel = _AutoGGUFLoader(path, spark_session._jsparkSession)._java_obj
         return AutoGGUFModel(java_model=jModel)
 
     @staticmethod
-    def pretrained(name="phi3.5_mini_4k_instruct_q4_gguf", lang="en", remote_loc=None):
+    def pretrained(name="Phi_4_mini_instruct_Q4_K_M_gguf", lang="en", remote_loc=None):
         """Downloads and loads a pretrained model.
 
         Parameters
         ----------
         name : str, optional
-            Name of the pretrained model, by default "phi3.5_mini_4k_instruct_q4_gguf"
+            Name of the pretrained model, by default "Phi_4_mini_instruct_Q4_K_M_gguf"
         lang : str, optional
             Language of the pretrained model, by default "en"
         remote_loc : str, optional
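sparknlp/annotator/seq2seq/auto_gguf_vision_model.py CHANGED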
@@ -43,7 +43,7 @@ class AutoGGUFVisionModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppPropert
     ...        .setOutputCol("completions")
 
 
-    The default model is ``"llava_v1.5_7b_Q4_0_gguf"``, if no name is provided.
+    The default model is ``"Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf"``, if no name is provided.
 
     For available pretrained models please see the `Models Hub <https://sparknlp.org/models>`__.
 
@@ -116,8 +116,6 @@ class AutoGGUFVisionModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppPropert
         Set optimization strategies that help on some NUMA systems (if available)
     ropeScalingType
         Set the RoPE frequency scaling method, defaults to linear unless specified by the model
-    poolingType
-        Set the pooling type for embeddings, use model default if unspecified
     modelDraft
         Set the draft model for speculative decoding
     modelAlias
@@ -126,8 +124,6 @@ class AutoGGUFVisionModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppPropert
         Set path to static lookup cache to use for lookup decoding (not updated by generation)
     lookupCacheDynamicFilePath
         Set path to dynamic lookup cache to use for lookup decoding (updated by generation)
-    embedding
-        Whether to load model with embedding support
     flashAttention
         Whether to enable Flash Attention
     inputPrefixBos
@@ -284,8 +280,10 @@ class AutoGGUFVisionModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppPropert
             useChatTemplate=True,
             nCtx=4096,
             nBatch=512,
-            embedding=False,
-            nPredict=100
+            nPredict=100,
+            nGpuLayers=99,
+            systemPrompt="You are a helpful assistant.",
+            batchSize=2,
         )
 
     @staticmethod
@@ -311,13 +309,13 @@ class AutoGGUFVisionModel(AnnotatorModel, HasBatchedAnnotate, HasLlamaCppPropert
         return AutoGGUFVisionModel(java_model=jModel)
 
     @staticmethod
-    def pretrained(name="llava_v1.5_7b_Q4_0_gguf", lang="en", remote_loc=None):
+    def pretrained(name="Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf", lang="en", remote_loc=None):
         """Downloads and loads a pretrained model.
 
         Parameters
         ----------
         name : str, optional
-            Name of the pretrained model, by default "llava_v1.5_7b_Q4_0_gguf"
+            Name of the pretrained model, by default "Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf"
         lang : str, optional
             Language of the pretrained model, by default "en"
         remote_loc : str, optional
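sparknlp/common/properties.py CHANGED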
@@ -628,7 +628,6 @@ class HasGeneratorProperties:
         "The number of sequences to return from the beam search.",
         typeConverter=TypeConverters.toInt)
 
-
     def setTask(self, value):
         """Sets the transformer's task, e.g. ``summarize:``.
 
@@ -639,7 +638,6 @@
         """
         return self._set(task=value)
 
-
     def setMinOutputLength(self, value):
         """Sets minimum length of the sequence to be generated.
 
@@ -650,7 +648,6 @@
         """
         return self._set(minOutputLength=value)
 
-
     def setMaxOutputLength(self, value):
         """Sets maximum length of output text.
 
@@ -661,7 +658,6 @@
         """
         return self._set(maxOutputLength=value)
 
-
     def setDoSample(self, value):
         """Sets whether or not to use sampling, use greedy decoding otherwise.
 
@@ -672,7 +668,6 @@
         """
         return self._set(doSample=value)
 
-
     def setTemperature(self, value):
         """Sets the value used to module the next token probabilities.
 
@@ -683,7 +678,6 @@
         """
         return self._set(temperature=value)
 
-
     def setTopK(self, value):
         """Sets the number of highest probability vocabulary tokens to keep for
         top-k-filtering.
@@ -695,7 +689,6 @@
         """
         return self._set(topK=value)
 
-
     def setTopP(self, value):
         """Sets the top cumulative probability for vocabulary tokens.
 
@@ -709,7 +702,6 @@
         """
         return self._set(topP=value)
 
-
     def setRepetitionPenalty(self, value):
         """Sets the parameter for repetition penalty. 1.0 means no penalty.
 
@@ -725,7 +717,6 @@
         """
         return self._set(repetitionPenalty=value)
 
-
     def setNoRepeatNgramSize(self, value):
         """Sets size of n-grams that can only occur once.
 
@@ -738,7 +729,6 @@
         """
         return self._set(noRepeatNgramSize=value)
 
-
     def setBeamSize(self, value):
         """Sets the number of beam size for beam search.
 
@@ -749,7 +739,6 @@
         """
         return self._set(beamSize=value)
 
-
     def setNReturnSequences(self, value):
         """Sets the number of sequences to return from the beam search.
 
@@ -845,11 +834,10 @@ class HasLlamaCppProperties:
                          typeConverter=TypeConverters.toString)
     # Set the pooling type for embeddings, use model default if unspecified
     #
-    # - 0 NONE: Don't use any pooling
-    # - 1 MEAN: Mean Pooling
-    # - 2 CLS: CLS Pooling
-    # - 3 LAST: Last token pooling
-    # - 4 RANK: For reranked models
+    # - MEAN: Mean Pooling
+    # - CLS: CLS Pooling
+    # - LAST: Last token pooling
+    # - RANK: For reranked models
     poolingType = Param(Params._dummy(), "poolingType",
                         "Set the pooling type for embeddings, use model default if unspecified",
                         typeConverter=TypeConverters.toString)
@@ -882,6 +870,10 @@ class HasLlamaCppProperties:
                             typeConverter=TypeConverters.toString)
     chatTemplate = Param(Params._dummy(), "chatTemplate", "The chat template to use",
                          typeConverter=TypeConverters.toString)
+    logVerbosity = Param(Params._dummy(), "logVerbosity", "Set the log verbosity level",
+                         typeConverter=TypeConverters.toInt)
+    disableLog = Param(Params._dummy(), "disableLog", "Whether to disable logging",
+                       typeConverter=TypeConverters.toBoolean)
 
     # -------- INFERENCE PARAMETERS --------
     inputPrefix = Param(Params._dummy(), "inputPrefix", "Set the prompt to start generation with",
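
The two new llama.cpp logging Params pair with `setLogVerbosity` and `setDisableLog` setters added further down in this diff. A sketch of how they would be used on any GGUF annotator mixing in `HasLlamaCppProperties`; the accepted verbosity scale is not documented here, so the value below is illustrative only:

```python
from sparknlp.annotator import AutoGGUFModel

model = AutoGGUFModel.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("completions") \
    .setLogVerbosity(1)  # illustrative level; the scale is not specified in this diff

# Alternatively, silence llama.cpp logging entirely:
model.setDisableLog(True)
```
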
@@ -1082,10 +1074,10 @@ class HasLlamaCppProperties:
         ropeScalingTypeUpper = ropeScalingType.upper()
         ropeScalingTypes = ["NONE", "LINEAR", "YARN"]
         if ropeScalingTypeUpper not in ropeScalingTypes:
-                raise ValueError(
-                    f"Invalid RoPE scaling type: {ropeScalingType}. "
-                    + f"Valid values are: {ropeScalingTypes}"
-                )
+            raise ValueError(
+                f"Invalid RoPE scaling type: {ropeScalingType}. "
+                + f"Valid values are: {ropeScalingTypes}"
+            )
         return self._set(ropeScalingType=ropeScalingTypeUpper)
 
     def setPoolingType(self, poolingType: str):
@@ -1093,11 +1085,10 @@ class HasLlamaCppProperties:
 
         Possible values:
 
-        - 0 NONE: Don't use any pooling
-        - 1 MEAN: Mean Pooling
-        - 2 CLS: CLS Pooling
-        - 3 LAST: Last token pooling
-        - 4 RANK: For reranked models
+        - MEAN: Mean Pooling
+        - CLS: CLS Pooling
+        - LAST: Last token pooling
+        - RANK: For reranked models
         """
         poolingTypeUpper = poolingType.upper()
         poolingTypes = ["NONE", "MEAN", "CLS", "LAST", "RANK"]
@@ -1124,10 +1115,6 @@ class HasLlamaCppProperties:
     # """Set path to dynamic lookup cache to use for lookup decoding (updated by generation)"""
     # return self._set(lookupCacheDynamicFilePath=lookupCacheDynamicFilePath)
 
-    def setEmbedding(self, embedding: bool):
-        """Whether to load model with embedding support"""
-        return self._set(embedding=embedding)
-
     def setFlashAttention(self, flashAttention: bool):
         """Whether to enable Flash Attention"""
         return self._set(flashAttention=flashAttention)
@@ -1280,11 +1267,19 @@ class HasLlamaCppProperties:
     def setUseChatTemplate(self, useChatTemplate: bool):
         """Set whether generate should apply a chat template"""
         return self._set(useChatTemplate=useChatTemplate)
-
+
     def setNParallel(self, nParallel: int):
         """Sets the number of parallel processes for decoding. This is an alias for `setBatchSize`."""
         return self.setBatchSize(nParallel)
 
+    def setLogVerbosity(self, logVerbosity: int):
+        """Set the log verbosity level"""
+        return self._set(logVerbosity=logVerbosity)
+
+    def setDisableLog(self, disableLog: bool):
+        """Whether to disable logging"""
+        return self._set(disableLog=disableLog)
+
     # -------- JAVA SETTERS --------
     def setTokenIdBias(self, tokenIdBias: Dict[int, float]):
         """Set token id bias"""
@@ -25,7 +25,7 @@ class Reader2Doc(
     HasExcelReaderProperties,
     HasHTMLReaderProperties,
     HasPowerPointProperties,
-    HasTextReaderProperties,
+    HasTextReaderProperties
 ):
     """
     The Reader2Doc annotator allows you to use reading files more smoothly within existing
@@ -36,7 +36,7 @@ class Reader2Doc(
     output as a structured Spark DataFrame.
 
     Supported formats include:
-
+
     - Plain text
     - HTML
     - Word (.doc/.docx)
@@ -77,42 +77,49 @@ class Reader2Doc(
         Params._dummy(),
         "contentPath",
         "contentPath path to files to read",
-        typeConverter=TypeConverters.toString,
+        typeConverter=TypeConverters.toString
     )
 
     outputCol = Param(
         Params._dummy(),
         "outputCol",
         "output column name",
-        typeConverter=TypeConverters.toString,
+        typeConverter=TypeConverters.toString
     )
 
     contentType = Param(
         Params._dummy(),
         "contentType",
         "Set the content type to load following MIME specification",
-        typeConverter=TypeConverters.toString,
+        typeConverter=TypeConverters.toString
     )
 
     explodeDocs = Param(
         Params._dummy(),
         "explodeDocs",
         "whether to explode the documents into separate rows",
-        typeConverter=TypeConverters.toBoolean,
+        typeConverter=TypeConverters.toBoolean
     )
 
     flattenOutput = Param(
         Params._dummy(),
         "flattenOutput",
         "If true, output is flattened to plain text with minimal metadata",
-        typeConverter=TypeConverters.toBoolean,
+        typeConverter=TypeConverters.toBoolean
     )
 
     titleThreshold = Param(
         Params._dummy(),
         "titleThreshold",
         "Minimum font size threshold for title detection in PDF docs",
-        typeConverter=TypeConverters.toFloat,
+        typeConverter=TypeConverters.toFloat
+    )
+
+    outputFormat = Param(
+        Params._dummy(),
+        "outputFormat",
+        "Output format for the table content. Options are 'plain-text' or 'html-table'. Default is 'json-table'.",
+        typeConverter=TypeConverters.toString
     )
 
     @keyword_only
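
The new `outputFormat` Param (with its setter added at the end of this file) controls how table content is rendered. A minimal sketch, assuming the MIME-style `contentType` values Reader2Doc already documents and a `spark` session from `sparknlp.start()`; note the Param description offers 'plain-text' and 'html-table' while naming 'json-table' as the default:

```python
from pyspark.ml import Pipeline
from sparknlp.reader.reader2doc import Reader2Doc

reader2doc = Reader2Doc() \
    .setContentType("text/html") \
    .setContentPath("/path/to/html/files") \
    .setOutputCol("document") \
    .setOutputFormat("html-table")  # or "plain-text"; default is "json-table"

empty_df = spark.createDataFrame([[""]]).toDF("text")
result_df = Pipeline(stages=[reader2doc]).fit(empty_df).transform(empty_df)
result_df.select("document").show(truncate=False)
```
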
@@ -126,7 +133,6 @@ class Reader2Doc(
             titleThreshold=18
         )
     @keyword_only
-
     def setParams(self):
         kwargs = self._input_kwargs
         return self._set(**kwargs)
@@ -192,3 +198,13 @@ class Reader2Doc(
             Minimum font size threshold for title detection in PDF docs
         """
         return self._set(titleThreshold=value)
+
+    def setOutputFormat(self, value):
+        """Sets the output format for the table content.
+
+        Parameters
+        ----------
+        value : str
+            Output format for the table content. Options are 'plain-text' or 'html-table'. Default is 'json-table'.
+        """
+        return self._set(outputFormat=value)
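sparknlp/reader/reader2table.py ADDED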
@@ -0,0 +1,163 @@
+# Copyright 2017-2025 John Snow Labs
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from pyspark import keyword_only
+from pyspark.ml.param import TypeConverters, Params, Param
+
+from sparknlp.common import AnnotatorType
+from sparknlp.internal import AnnotatorTransformer
+from sparknlp.partition.partition_properties import *
+
+class Reader2Table(
+    AnnotatorTransformer,
+    HasEmailReaderProperties,
+    HasExcelReaderProperties,
+    HasHTMLReaderProperties,
+    HasPowerPointProperties,
+    HasTextReaderProperties
+):
+    name = 'Reader2Table'
+
+    outputAnnotatorType = AnnotatorType.DOCUMENT
+
+    contentPath = Param(
+        Params._dummy(),
+        "contentPath",
+        "contentPath path to files to read",
+        typeConverter=TypeConverters.toString
+    )
+
+    outputCol = Param(
+        Params._dummy(),
+        "outputCol",
+        "output column name",
+        typeConverter=TypeConverters.toString
+    )
+
+    contentType = Param(
+        Params._dummy(),
+        "contentType",
+        "Set the content type to load following MIME specification",
+        typeConverter=TypeConverters.toString
+    )
+
+    explodeDocs = Param(
+        Params._dummy(),
+        "explodeDocs",
+        "whether to explode the documents into separate rows",
+        typeConverter=TypeConverters.toBoolean
+    )
+
+    flattenOutput = Param(
+        Params._dummy(),
+        "flattenOutput",
+        "If true, output is flattened to plain text with minimal metadata",
+        typeConverter=TypeConverters.toBoolean
+    )
+
+    titleThreshold = Param(
+        Params._dummy(),
+        "titleThreshold",
+        "Minimum font size threshold for title detection in PDF docs",
+        typeConverter=TypeConverters.toFloat
+    )
+
+    outputFormat = Param(
+        Params._dummy(),
+        "outputFormat",
+        "Output format for the table content. Options are 'plain-text' or 'html-table'. Default is 'json-table'.",
+        typeConverter=TypeConverters.toString
+    )
+
+    @keyword_only
+    def __init__(self):
+        super(Reader2Table, self).__init__(classname="com.johnsnowlabs.reader.Reader2Table")
+        self._setDefault(outputCol="document")
+
+    @keyword_only
+    def setParams(self):
+        kwargs = self._input_kwargs
+        return self._set(**kwargs)
+
+    def setContentPath(self, value):
+        """Sets content path.
+
+        Parameters
+        ----------
+        value : str
+            contentPath path to files to read
+        """
+        return self._set(contentPath=value)
+
+    def setContentType(self, value):
+        """
+        Set the content type to load following MIME specification
+
+        Parameters
+        ----------
+        value : str
+            content type to load following MIME specification
+        """
+        return self._set(contentType=value)
+
+    def setExplodeDocs(self, value):
+        """Sets whether to explode the documents into separate rows.
+
+        Parameters
+        ----------
+        value : boolean
+            Whether to explode the documents into separate rows
+        """
+        return self._set(explodeDocs=value)
+
+    def setOutputCol(self, value):
+        """Sets output column name.
+
+        Parameters
+        ----------
+        value : str
+            Name of the Output Column
+        """
+        return self._set(outputCol=value)
+
+    def setFlattenOutput(self, value):
+        """Sets whether to flatten the output to plain text with minimal metadata.
+
+        Parameters
+        ----------
+        value : bool
+            If true, output is flattened to plain text with minimal metadata
+        """
+        return self._set(flattenOutput=value)
+
+    def setTitleThreshold(self, value):
+        """Sets the minimum font size threshold for title detection in PDF documents.
+
+        Parameters
+        ----------
+        value : float
+            Minimum font size threshold for title detection in PDF docs
+        """
+        return self._set(titleThreshold=value)
+
+    def setOutputFormat(self, value):
+        """Sets the output format for the table content.
+
+        Parameters
+        ----------
+        value : str
+            Output format for the table content. Options are 'plain-text' or 'html-table'. Default is 'json-table'.
+        """
+        return self._set(outputFormat=value)
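
Reader2Table is new in this release and ships without usage examples in the diff, so the following is an inferred sketch: it mirrors Reader2Doc's Params, defaults its output column to "document", and exposes the same three `outputFormat` options. As above, `spark` is assumed to come from `sparknlp.start()`.

```python
from pyspark.ml import Pipeline
from sparknlp.reader.reader2table import Reader2Table

reader2table = Reader2Table() \
    .setContentType("text/html") \
    .setContentPath("/path/to/files/with/tables") \
    .setOutputFormat("json-table")  # default; "plain-text" and "html-table" also listed

empty_df = spark.createDataFrame([[""]]).toDF("text")
tables_df = Pipeline(stages=[reader2table]).fit(empty_df).transform(empty_df)
tables_df.select("document").show(truncate=False)
```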