SinaTools 0.1.26__py2.py3-none-any.whl → 0.1.28__py2.py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (34)
  1. SinaTools-0.1.28.dist-info/METADATA +64 -0
  2. {SinaTools-0.1.26.dist-info → SinaTools-0.1.28.dist-info}/RECORD +33 -30
  3. sinatools/CLI/DataDownload/download_files.py +5 -8
  4. sinatools/CLI/morphology/ALMA_multi_word.py +0 -34
  5. sinatools/CLI/morphology/morph_analyzer.py +1 -1
  6. sinatools/CLI/ner/corpus_entity_extractor.py +17 -4
  7. sinatools/CLI/ner/entity_extractor.py +8 -8
  8. sinatools/CLI/utils/implication.py +3 -3
  9. sinatools/CLI/utils/jaccard.py +2 -2
  10. sinatools/DataDownload/downloader.py +2 -2
  11. sinatools/VERSION +1 -1
  12. sinatools/morphology/morph_analyzer.py +44 -45
  13. sinatools/ner/__init__.py +6 -1
  14. sinatools/ner/entity_extractor.py +42 -1
  15. sinatools/ner/relation_extractor.py +201 -0
  16. sinatools/semantic_relatedness/compute_relatedness.py +22 -0
  17. sinatools/synonyms/__init__.py +2 -2
  18. sinatools/synonyms/synonyms_generator.py +45 -1
  19. sinatools/utils/jaccard.py +1 -1
  20. sinatools/utils/parser.py +12 -15
  21. sinatools/utils/similarity.py +240 -0
  22. sinatools/utils/text_dublication_detector.py +22 -0
  23. sinatools/utils/text_transliteration.py +1 -1
  24. sinatools/utils/tokenizer.py +1 -1
  25. sinatools/utils/word_compare.py +667 -0
  26. sinatools/wsd/__init__.py +1 -1
  27. sinatools/wsd/disambiguator.py +20 -19
  28. SinaTools-0.1.26.dist-info/METADATA +0 -34
  29. {SinaTools-0.1.26.data → SinaTools-0.1.28.data}/data/sinatools/environment.yml +0 -0
  30. {SinaTools-0.1.26.dist-info → SinaTools-0.1.28.dist-info}/AUTHORS.rst +0 -0
  31. {SinaTools-0.1.26.dist-info → SinaTools-0.1.28.dist-info}/LICENSE +0 -0
  32. {SinaTools-0.1.26.dist-info → SinaTools-0.1.28.dist-info}/WHEEL +0 -0
  33. {SinaTools-0.1.26.dist-info → SinaTools-0.1.28.dist-info}/entry_points.txt +0 -0
  34. {SinaTools-0.1.26.dist-info → SinaTools-0.1.28.dist-info}/top_level.txt +0 -0
@@ -0,0 +1,64 @@
+ Metadata-Version: 2.1
+ Name: SinaTools
+ Version: 0.1.28
+ Summary: Open-source Python toolkit for Arabic Natural Language Understanding, allowing people to integrate it into their system workflow.
+ Home-page: https://github.com/SinaLab/sinatools
+ License: MIT license
+ Keywords: sinatools
+ Platform: UNKNOWN
+ Description-Content-Type: text/markdown
+ Requires-Dist: six
+ Requires-Dist: farasapy
+ Requires-Dist: tqdm
+ Requires-Dist: requests
+ Requires-Dist: regex
+ Requires-Dist: pathlib
+ Requires-Dist: torch (==1.13.0)
+ Requires-Dist: transformers (==4.24.0)
+ Requires-Dist: torchtext (==0.14.0)
+ Requires-Dist: torchvision (==0.14.0)
+ Requires-Dist: seqeval (==1.2.2)
+ Requires-Dist: natsort (==7.1.1)
+
+ SinaTools
+ ======================
+ Open Source Toolkit for Arabic NLP and NLU developed by [SinaLab](http://sina.birzeit.edu/) at Birzeit University. SinaTools is available through Python APIs, command lines, colabs, and online demos.
+
+ See the full list of [Available Packages](https://sina.birzeit.edu/sinatools/), which includes: (1) [Morphology Tagging](https://sina.birzeit.edu/sinatools/index.html#morph), (2) [Named Entity Recognition (NER)](https://sina.birzeit.edu/sinatools/index.html#ner), (3) [Word Sense Disambiguation (WSD)](https://sina.birzeit.edu/sinatools/index.html#wsd), (4) [Semantic Relatedness](https://sina.birzeit.edu/sinatools/index.html#sr), (5) [Synonymy Extraction and Evaluation](https://sina.birzeit.edu/sinatools/index.html#se), (6) [Relation Extraction](https://sina.birzeit.edu/sinatools/index.html#re), (7) [Utilities](https://sina.birzeit.edu/sinatools/index.html#u) (diacritic-based word matching, Jaccard similarity, parser, tokenizers, corpora processing, transliteration, etc.).
+
+ See [Demo Pages](https://sina.birzeit.edu/sinatools/).
+
+ See the [benchmarking](https://www.jarrar.info/publications/HJK24.pdf), which shows that SinaTools outperformed all related toolkits.
+
+ Installation
+ --------
+ To install SinaTools, ensure you are using Python version 3.10.8, then clone the [GitHub](git://github.com/SinaLab/SinaTools) repository.
+
+ Alternatively, you can execute the following command:
+
+ ```bash
+ pip install sinatools
+ ```
+
+ Installing Models and Data Files
+ --------
+ Some modules in SinaTools require data files and fine-tuned models to be downloaded. To download these models, please consult the [DataDownload documentation](https://sina.birzeit.edu/sinatools/documentation/cli_tools/DataDownload/DataDownload.html).
+
+ Documentation
+ --------
+ For more information, please refer to the [main page](https://sina.birzeit.edu/sinatools) or the [online documentation](https://sina.birzeit.edu/sinatools/documentation).
+
+ Citation
+ -------
+ Tymaa Hammouda, Mustafa Jarrar, Mohammed Khalilia: [SinaTools: Open Source Toolkit for Arabic Natural Language Understanding](http://www.jarrar.info/publications/HJK24.pdf). In Proceedings of the 2024 AI in Computational Linguistics (ACLing 2024), Procedia Computer Science, Dubai. Elsevier.
+
+ License
+ --------
+ SinaTools is available under the MIT License. See the [LICENSE](https://github.com/SinaLab/sinatools/blob/main/LICENSE) file for more information.
+
+ Reporting Issues
+ --------
+ To report any issues or bugs, please contact us at "sina.institute.bzu@gmail.com" or visit [SinaTools Issues](https://github.com/SinaLab/sinatools/issues).
+
+
+
@@ -1,26 +1,26 @@
- SinaTools-0.1.26.data/data/sinatools/environment.yml,sha256=OzilhLjZbo_3nU93EQNUFX-6G5O3newiSWrwxvMH2Os,7231
- sinatools/VERSION,sha256=5E6i4X07Go6cKsVD3uEZkX9jXfyE05s7HlzVXSisTX8,6
+ SinaTools-0.1.28.data/data/sinatools/environment.yml,sha256=OzilhLjZbo_3nU93EQNUFX-6G5O3newiSWrwxvMH2Os,7231
+ sinatools/VERSION,sha256=NhKxpb_MVtfi01FRu6rOIYrldV__GIvBYcyyn5UnDBM,6
  sinatools/__init__.py,sha256=bEosTU1o-FSpyytS6iVP_82BXHF2yHnzpJxPLYRbeII,135
  sinatools/environment.yml,sha256=OzilhLjZbo_3nU93EQNUFX-6G5O3newiSWrwxvMH2Os,7231
  sinatools/install_env.py,sha256=EODeeE0ZzfM_rz33_JSIruX03Nc4ghyVOM5BHVhsZaQ,404
  sinatools/sinatools.py,sha256=vR5AaF0iel21LvsdcqwheoBz0SIj9K9I_Ub8M8oA98Y,20
- sinatools/CLI/DataDownload/download_files.py,sha256=VunXU_vAweKs7aS0FNM84N_2lhYT5T94Y8B3NWmGksg,2630
- sinatools/CLI/morphology/ALMA_multi_word.py,sha256=ZImJ1vtcpSHydI1BjJmK3KcMJbGBZX16kO4L6rxvBvA,2086
- sinatools/CLI/morphology/morph_analyzer.py,sha256=ieIM47QK9Nct3MtCS9uq3h2rZN5r4qNhsLmlVeE6wiE,3503
- sinatools/CLI/ner/corpus_entity_extractor.py,sha256=Da-DHFrqT6if7w6WnodB4TBE5ze3DJYjb2Mmju_Qd7g,4034
- sinatools/CLI/ner/entity_extractor.py,sha256=IiTioe0px0aJ1E58FrDVa2yNgM8Ie4uS2LZKK_z2Qn4,2942
+ sinatools/CLI/DataDownload/download_files.py,sha256=TzS0XjYDhusRBb2CRX1EjKjORa0wI6me_XoZ09dY4R8,2397
+ sinatools/CLI/morphology/ALMA_multi_word.py,sha256=rmpa72twwIJHme_kpQ1lu3_7y_Jorj70QTvOnQMJRuI,1274
+ sinatools/CLI/morphology/morph_analyzer.py,sha256=HPamEKos_JRYCJv_2q6c12N--da58_JXTno9haww5Ao,3497
+ sinatools/CLI/ner/corpus_entity_extractor.py,sha256=DdvigsDQzko5nJBjzUXlIDqoBMBTVzktjSo7JfEXTIA,4778
+ sinatools/CLI/ner/entity_extractor.py,sha256=G9j-t0WKm2CRORhqARJM-pI-KArQ2IXIvnBK_NHxlHs,2885
  sinatools/CLI/utils/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
  sinatools/CLI/utils/arStrip.py,sha256=NLyp8vOu2xv80tL9jiKRvyptmbkRZVg-wcAr-9YyvNY,3264
  sinatools/CLI/utils/corpus_tokenizer.py,sha256=nH0T4h6urr_0Qy6-wN3PquOtnwybj0REde5Ts_OE4U8,1650
- sinatools/CLI/utils/implication.py,sha256=nvoiI5UHHaJdd6MICql0pB_-h3L0icYwP1WgJi2h7p0,2854
- sinatools/CLI/utils/jaccard.py,sha256=NoKbWAq6dHDtQ56mAc1kdAnROm8NXEjZ1ecVZ7EYm6Y,4205
+ sinatools/CLI/utils/implication.py,sha256=AojpkCwUQJiQjxhyEUWKRHmBnIt1tVqr485cAF7Thq0,2857
+ sinatools/CLI/utils/jaccard.py,sha256=w56N_cNEFJ0A7WtunmY_xtms4srFagKBzrW_0YhH2DE,4216
  sinatools/CLI/utils/remove_latin.py,sha256=NOaTm2RHxt5IQrV98ySTmD8rTXTmcqSmfbPAwTyaXqU,848
  sinatools/CLI/utils/remove_punctuation.py,sha256=vJAZlEn7WGftZAFVFYnddkRrxdJ_rMmKB9vFZkY-jN4,1097
  sinatools/CLI/utils/sentence_tokenizer.py,sha256=Wli8eiDbWSd_Z8UKpu_JkaS8jImowa1vnRL0oYCSfqw,2823
  sinatools/CLI/utils/text_dublication_detector.py,sha256=dW70O5O20GxeUDDF6zVYn52wWLmJF-HBZgvqIeVL2rQ,1661
  sinatools/CLI/utils/text_transliteration.py,sha256=vz-3kxWf8pNYVCqNAtBAiA6u_efrS5NtWT-ofN1NX6I,2014
  sinatools/DataDownload/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- sinatools/DataDownload/downloader.py,sha256=F-SV-0mbYMYFSNCx8FoAYXhn0X1j0dF37PTLU0nUBVg,6482
+ sinatools/DataDownload/downloader.py,sha256=6xH55WlDhgtImPRFQ0AaeDFJjL8OMNU29x61PL8mZ2w,6468
  sinatools/arabert/__init__.py,sha256=ely2PttjgSv7vKdzskuD1rtK_l_UOpmxJSz8isrveD0,16
  sinatools/arabert/preprocess.py,sha256=qI0FsuMTOzdRlYGCtLrjpXgikNElUZPv9bnjaKDZKJ4,33024
  sinatools/arabert/arabert/__init__.py,sha256=KbSAH-XqbRygn0y59m5-ZYOLXgpT1gSgE3F-qd4rKEc,627
@@ -75,14 +75,15 @@ sinatools/arabert/aragpt2/grover/train_tpu.py,sha256=qNgLI_j6-KYkTMJfVoFlh4NIKwe
  sinatools/arabert/aragpt2/grover/utils.py,sha256=V5wMUxK03r5g_pb7R3_uGLOPqQJfbIB0VaJ8ZDM4XAo,8473
  sinatools/morphology/ALMA_multi_word.py,sha256=hj_-8ojrYYHnfCGk8WKtJdUR8mauzQdma4WUm-okDps,1346
  sinatools/morphology/__init__.py,sha256=I4wVBh8BhyNl-CySVdiI_nUSn6gj1j-gmLKP300RpE0,1216
- sinatools/morphology/morph_analyzer.py,sha256=tA78gWg6iaE_G1c2xqxZoXZWNbvHBJLrTSxPyir5Xn8,6941
- sinatools/ner/__init__.py,sha256=gSs0x6veWJ8j3_iOs79tynBd_hJP0t44CGpJ0xzoiW4,1048
+ sinatools/morphology/morph_analyzer.py,sha256=3B-ewxFg_If83oYlk1bDdVS1clb-mgyAF4WgAMqcAVI,7009
+ sinatools/ner/__init__.py,sha256=CLPaqUcvPGAA4lU-6hjAqjNfKJ5WtwRfsma6QkYZHEk,1379
  sinatools/ner/data.py,sha256=lvOW86dXse8SC75Q0supQaE0rrRffoxNjIA0Qbv5WZY,4354
  sinatools/ner/data_format.py,sha256=7Yt0aOicOn9_YuuyCkM_IYi_rgjGYxR9bCuUaNGM73o,4341
  sinatools/ner/datasets.py,sha256=mG1iwqSm3lXCFHLqE-b4wNi176cpuzNBz8tKaBU6z6M,5059
- sinatools/ner/entity_extractor.py,sha256=yQnfayT03qAnQ4FBdBFhvl8M2pgIttrdWSWE9wgO2LI,1876
+ sinatools/ner/entity_extractor.py,sha256=O2epRwRFUUcQs3SnFIYHVBI4zVhr8hRcj0XJYeby4ts,3588
  sinatools/ner/helpers.py,sha256=dnOoDY5JMyOLTUWVIZLMt8mBn2IbWlVaqHhQyjs1voo,2343
  sinatools/ner/metrics.py,sha256=Irz6SsIvpOzGIA2lWxrEV86xnTnm0TzKm9SUVT4SXUU,2734
+ sinatools/ner/relation_extractor.py,sha256=a85xGX6V72fDpJk0GKmmtlWf8S8ezY-2pm5oGc9_ESY,9750
  sinatools/ner/transforms.py,sha256=vti3mDdi-IRP8i0aTQ37QqpPlP9hdMmJ6_bAMa0uL-s,4871
  sinatools/ner/data/__init__.py,sha256=W0C1ge_XxTfmdEGz0hkclz57aLI5VFS5t6BjByCfkFk,57
  sinatools/ner/data/datasets.py,sha256=lcdDDenFMEKIGYQmfww2dk_9WKWrJO9HtKptaAEsRmY,5064
@@ -96,27 +97,29 @@ sinatools/ner/trainers/BertNestedTrainer.py,sha256=Pb4O2WeBmTvV3hHMT6DXjxrTzgtuh
  sinatools/ner/trainers/BertTrainer.py,sha256=B_uVtUwfv_eFwMMPsKQvZgW_ZNLy6XEsX5ePR0s8d-k,6433
  sinatools/ner/trainers/__init__.py,sha256=UDok8pDDpYOpwRBBKVLKaOgSUlmqqb-zHZI1p0xPxzI,188
  sinatools/semantic_relatedness/__init__.py,sha256=S0xrmqtl72L02N56nbNMudPoebnYQgsaIyyX-587DsU,830
- sinatools/semantic_relatedness/compute_relatedness.py,sha256=JvI0cXgukKtuMpmAygMnlocCsPeAJ98LD1jZCP_6SyQ,1110
- sinatools/synonyms/__init__.py,sha256=BN1f99w4yqnuT9PrOBsbeOMepPJPi-Fh1hEMvxqfMdM,562
- sinatools/synonyms/synonyms_generator.py,sha256=FgAiuduSFyM6vJobWZKHg4KNWIQz8T6MGBPVIuVuw-8,6506
+ sinatools/semantic_relatedness/compute_relatedness.py,sha256=_9HFPs3nQBLklHFfkc9o3gEjEI6Bd34Ha4E1Kvv1RIg,2256
+ sinatools/synonyms/__init__.py,sha256=yMuphNZrm5XLOR2T0weOHcUysJm-JKHUmVLoLQO8390,548
+ sinatools/synonyms/synonyms_generator.py,sha256=jRd0D3_kn-jYBaZzqY-7oOy0SFjSJ-mjM7JhsySzX58,9037
  sinatools/utils/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
  sinatools/utils/charsets.py,sha256=rs82oZJqRqosZdTKXfFAJfJ5t4PxjMM_oAPsiWSWuwU,2817
  sinatools/utils/implication.py,sha256=MsbI6S1LNY-fCxGMxFTuaV639r3QijkkdcfH48rvY7A,27804
- sinatools/utils/jaccard.py,sha256=S7OgvaMqkN5HFgTZkKhMCNAuAnQ0LhRyXPN79jAzmKM,10113
- sinatools/utils/parser.py,sha256=CPPtCrsbxUqsjhY5C9wTOgkAs6iw0k_WvMUxLEPM1IU,6168
+ sinatools/utils/jaccard.py,sha256=kLIptPNB2VIqnemVve9auyOL1kXHIsCkKCEwxFM8yP4,10114
+ sinatools/utils/parser.py,sha256=qvHdln5R5CAv_0UOJWe0mcp8JCsGqgazoeIIkoALH88,6259
  sinatools/utils/readfile.py,sha256=xE4LEaCqXJIk9v37QUSSmWb-aY3UnCFUNb7uVdx3cpM,133
- sinatools/utils/text_dublication_detector.py,sha256=6yAOUtdw4TKiJkUPDDi3oK7CEoIuBDbliJ4PU7kapfo,4249
- sinatools/utils/text_transliteration.py,sha256=NQoXrxI-h0UXnvVtDA3skNJduxIy0IW26r46N4tDxGk,8766
- sinatools/utils/tokenizer.py,sha256=QHyrVqJA_On4rKxexiWR2ovq4pI1-u6iZkdhRbK9tew,6676
+ sinatools/utils/similarity.py,sha256=CgKOJpRAU5UaSjOg-sdZcACCNl9tuKDRwdFAKATCL_w,10762
+ sinatools/utils/text_dublication_detector.py,sha256=FeSkbfWGMQluz23H4CBHXION-walZPgjueX6AL8u_Q0,5660
+ sinatools/utils/text_transliteration.py,sha256=F3smhr2AEJtySE6wGQsiXXOslTvSDzLivTYu0btgc10,8769
+ sinatools/utils/tokenizer.py,sha256=nyk6lh5-p38wrU62hvh4wg7ni9ammkdqqIgcjbbBxxo,6965
  sinatools/utils/tokenizers_words.py,sha256=efNfOil9qDNVJ9yynk_8sqf65PsL-xtsHG7y2SZCkjQ,656
- sinatools/wsd/__init__.py,sha256=yV-SQSCzSrjbNkciMbDCqzGZ_EESchL7rlJk56uibVI,309
- sinatools/wsd/disambiguator.py,sha256=43Iq7NTZsiYWGFg-NUDrQuJKO1NT9QOnfBPB10IOJNs,19828
+ sinatools/utils/word_compare.py,sha256=rS2Z74sf7R-7MTXyrFj5miRi2TnSG9OdTDp_qQYuo2Y,28200
+ sinatools/wsd/__init__.py,sha256=mwmCUurOV42rsNRpIUP3luG0oEzeTfEx3oeDl93Oif8,306
+ sinatools/wsd/disambiguator.py,sha256=h-3idc5rPPbMDSE_QVJAsEVkDHwzYY3L2SEPNXIdOcc,20104
  sinatools/wsd/settings.py,sha256=6XflVTFKD8SVySX9Wj7zYQtV26WDTcQ2-uW8-gDNHKE,747
  sinatools/wsd/wsd.py,sha256=gHIBUFXegoY1z3rRnIlK6TduhYq2BTa_dHakOjOlT4k,4434
- SinaTools-0.1.26.dist-info/AUTHORS.rst,sha256=aTWeWlIdfLi56iLJfIUAwIrmqDcgxXKLji75_Fjzjyg,174
- SinaTools-0.1.26.dist-info/LICENSE,sha256=uwsKYG4TayHXNANWdpfMN2lVW4dimxQjA_7vuCVhD70,1088
- SinaTools-0.1.26.dist-info/METADATA,sha256=jqsARSXI1Z0hT9-ev6ewzZeNH_H350lv_c2oav_SKWg,953
- SinaTools-0.1.26.dist-info/WHEEL,sha256=6T3TYZE4YFi2HTS1BeZHNXAi8N52OZT4O-dJ6-ome_4,116
- SinaTools-0.1.26.dist-info/entry_points.txt,sha256=ZwZLolnWog2fjdDrfaHNHob8SE_YtMbD6ayzsOzItxs,1234
- SinaTools-0.1.26.dist-info/top_level.txt,sha256=8tNdPTeJKw3TQCaua8IJIx6N6WpgZZmVekf1OdBNJpE,10
- SinaTools-0.1.26.dist-info/RECORD,,
+ SinaTools-0.1.28.dist-info/AUTHORS.rst,sha256=aTWeWlIdfLi56iLJfIUAwIrmqDcgxXKLji75_Fjzjyg,174
+ SinaTools-0.1.28.dist-info/LICENSE,sha256=uwsKYG4TayHXNANWdpfMN2lVW4dimxQjA_7vuCVhD70,1088
+ SinaTools-0.1.28.dist-info/METADATA,sha256=oJ0szwQ8a_ykAsYn2uqU-pmhF4N4Sh0oIsv1JCYeX78,3267
+ SinaTools-0.1.28.dist-info/WHEEL,sha256=6T3TYZE4YFi2HTS1BeZHNXAi8N52OZT4O-dJ6-ome_4,116
+ SinaTools-0.1.28.dist-info/entry_points.txt,sha256=ZwZLolnWog2fjdDrfaHNHob8SE_YtMbD6ayzsOzItxs,1234
+ SinaTools-0.1.28.dist-info/top_level.txt,sha256=8tNdPTeJKw3TQCaua8IJIx6N6WpgZZmVekf1OdBNJpE,10
+ SinaTools-0.1.28.dist-info/RECORD,,
@@ -2,7 +2,7 @@
  About:
  ------
 
- The download_files is a command-line interface for downloading various NLP resources from pre-specified URLs. It is a part of the sinatools package and provides options to choose which files to download and to specify a download directory. The tool automatically handles file extraction for zip and tar.gz files.
+ The download_files command allows users to select specific files and models to download and use within SinaTools modules. Additionally, it automatically manages the extraction of compressed files, including zip and tar.gz formats.
 
  Usage:
  ------
@@ -18,7 +18,7 @@ Below is the usage information that can be generated by running download_files -
 
  Options:
  -f, --files FILES
- Names of the files to download. Available files are: ner, morph, wsd_model, wsd_tokenizer, glosses_dic, five_grams, four_grams, three_grams, two_grams, synonyms_level2, synonyms_level3.
+ Names of the files to download. Available files are: ner, morph, wsd, synonyms.
  If no file is specified, all files will be downloaded.
 
  Examples:
@@ -28,7 +28,6 @@ Examples:
 
  download_files -f morph ner
  This command will download only the `morph` and `ner` files to the default directory.
-
  """
 
  import argparse
@@ -56,14 +55,14 @@ def main():
  download_file(urls["ner"])
  download_file(urls["wsd_model"])
  download_file(urls["wsd_tokenizer"])
- download_file(urls["glosses_dic"])
+ download_file(urls["one_gram"])
  download_file(urls["five_grams"])
  download_file(urls["four_grams"])
  download_file(urls["three_grams"])
  download_file(urls["two_grams"])
  elif file == "synonyms":
- download_file(urls["synonyms_level2"])
- download_file(urls["synonyms_level3"])
+ download_file(urls["graph_l2"])
+ download_file(urls["graph_l3"])
  else:
  url = urls[file]
  download_file(url)
@@ -72,5 +71,3 @@ def main():
 
  if __name__ == '__main__':
  main()
-
- #download_files -f morph ner
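The extraction behavior described in the download_files docstring (zip and tar.gz archives handled automatically) can be sketched with the standard library alone. `extract_archive` below is a hypothetical helper written for illustration, not the function download_files actually uses:

```python
import tarfile
import zipfile
from pathlib import Path

def extract_archive(archive, dest):
    """Dispatch extraction by file suffix, as the download_files
    docstring describes (zip and tar.gz supported)."""
    p = Path(archive)
    if p.suffix == ".zip":
        with zipfile.ZipFile(p) as zf:
            zf.extractall(dest)
    elif p.name.endswith((".tar.gz", ".tgz")):
        with tarfile.open(p, "r:gz") as tf:
            tf.extractall(dest)
    else:
        raise ValueError(f"unsupported archive type: {p.name}")
```

Dispatching on the suffix keeps the downloader agnostic to what each resource contains; unknown formats fail loudly instead of being silently left compressed.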
@@ -1,37 +1,3 @@
- """
- About:
- ------
- The alma_multi_word tool performs multi-word morphological analysis using SinaTools' `ALMA_multi_word` utility. Given a multi-word Arabic text input, it returns a detailed analysis in JSON format.
-
- Usage:
- ------
- Below is the usage information that can be generated by running alma_multi_word --help.
-
- .. code-block:: none
-
- alma_multi_word --multi_word=MULTI_WORD_TEXT
- alma_multi_word --file
-
- Options:
- --------
-
- .. code-block:: none
-
- --multi_word MULTI_WORD_TEXT
- The multi-word Arabic text that needs to be analyzed.
- --file
- File containing the multi-word text to be analyzed
-
- Examples:
- ---------
-
- .. code-block:: none
-
- alma_multi_word --multi_word "Your multi-word text here"
- alma_multi_word --file "path/to/your/file.txt"
-
- """
-
  import argparse
  from sinatools.morphology.ALMA_multi_word import ALMA_multi_word
  import json
@@ -1,7 +1,7 @@
  """
  About:
  ------
- The morphology_analyzer command is designed to provide morphological analysis for Arabic text using the SinaTools morph_analyzer component. Users can specify the language and desired analysis task (lemmatization, part-of-speech tagging, or full morphological analysis), and flag.
+ The morphology_analyzer command is designed to provide morphological analysis for Arabic text using the SinaTools morph_analyzer API. Users can specify the language, the desired analysis task (lemmatization, part-of-speech tagging, or full morphological analysis), and a flag.
 
  Usage:
  ------
@@ -7,13 +7,26 @@ import argparse
  from sinatools.ner.entity_extractor import extract
 
  """
- This tool processes a csv file and returns named entites for each token within the text, based on the specified batch size. As follows:
+ The following command takes a CSV file as input. It splits a specific column into tokens and tags them using named entity recognition (NER). It retains all other columns as they are, and also adds the sentences and tokens. Additionally, it assigns an auto-incrementing ID, a sentence ID, and a global sentence ID to each token. As follows:
 
  Usage:
  ------
- Run the script with the following command:
-
- corpus_entity_extractor input.csv --text-columns "TextColumn1,TextColumn2" --additional-columns "Column3,Column4" --output-csv output.csv
+ Below is the usage information that can be generated by running corpus_entity_extractor --help.
+
+ corpus_entity_extractor --input_csv path/to/csv/file --text-columns "name of the column to be tokenized" --additional-columns "Column3,Column4" --output-csv path/to/csv/file
+
+ Options:
+ -------
+ --input_csv CSV_FILE_PATH
+ Path of the input csv file
+ --text-columns STR
+ Name of the text column that needs to be tagged
+ --additional-columns
+ Names of the columns to be returned as they are
+ --output-csv
+ Path of the output csv file
+
+ corpus_entity_extractor --input_csv "input.csv" --text-columns "TextColumn1" --additional-columns "Column3,Column4" --output-csv "output.csv"
  """
 
  def jsons_to_list_of_lists(json_list):
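Only the signature of `jsons_to_list_of_lists` is visible in this hunk. A plausible sketch of what such a helper could do — flattening the extractor's per-token JSON objects into row-shaped lists for CSV output — is below; the body is an assumption for illustration, not the package's code:

```python
def jsons_to_list_of_lists(json_list):
    """Hypothetical sketch: turn [{"token": ..., "tags": ...}, ...]
    into [[token, tags], ...], a shape convenient for csv.writer."""
    return [[obj["token"], obj["tags"]] for obj in json_list]
```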
@@ -1,7 +1,7 @@
  """
  About:
  ------
- This tool processes an input text and returns named entites for each token within the text, based on the specified batch size. As follows:
+ This command processes an input text and returns named entities for each token within the text. As follows:
 
  Usage:
  ------
@@ -10,7 +10,7 @@ Below is the usage information that can be generated by running entity_extractor
 
  .. code-block:: none
 
  entity_extractor --text=INPUT_TEXT
- entity_extractor --dir=INPUT_FILE --output_csv=OUTPUT_FILE_NAME
+ entity_extractor --dir=DIRECTORY_PATH --output_csv "path/to/csv/file"
 
  Options:
  --------
@@ -18,11 +18,11 @@ Options:
 
  .. code-block:: none
 
  --text INPUT_TEXT
- The text that needs to be analyzed for Named Entity Recognition.
- --file INPUT_FILE
- File containing the text to be analyzed for Named Entity Recognition.
- --output_csv OUTPUT_FILE_NAME
- A file containing the tokenized text and its Named Entity tags.
+ The text that needs to be analyzed for Named Entity Recognition.
+ --dir DIRECTORY_PATH
+ Directory containing the text files to be analyzed for Named Entity Recognition
+ --output_csv CSV_FILE
+ The path of the output csv file
 
 
  Examples:
@@ -31,7 +31,7 @@ Examples:
 
  .. code-block:: none
 
  entity_extractor --text "Your text here"
- entity_extractor --dir "/path/to/your/directory" --output_csv "output.csv"
+ entity_extractor --dir "path/to/your/dir" --output_csv "path/to/your/file"
 
  """
@@ -39,7 +39,7 @@ Examples:
 
  """
  import argparse
- from sinatools.utils.implication import Implication
+ from sinatools.utils.word_compare import Implication
  def read_file(file_path):
  with open(file_path, 'r', encoding='utf-8') as file:
@@ -72,8 +72,8 @@ def main():
  # Instantiate the Implication class
  implication_obj = Implication(word1, word2)
 
- # For this example, assuming there is a method `get_result()` in the Implication class.
- result = implication_obj.get_result()
+ # For this example, assuming there is a method `get_verdict()` in the Implication class.
+ result = implication_obj.get_verdict()
  print(result)
 
  if __name__ == '__main__':
@@ -46,7 +46,7 @@ Examples:
  """
 
  import argparse
- from sinatools.utils.jaccard import jaccard
+ from sinatools.utils.similarity import get_jaccard
  from sinatools.utils.readfile import read_file
 
 
@@ -76,7 +76,7 @@ def main():
  print("Either --file1 and --file2 arguments or both --set1 and --set2 arguments must be provided.")
  return
 
- similarity = jaccard(args.delimiter, set1, set2, args.selection, args.ignoreAllDiacriticsButNotShadda, args.ignoreShaddaDiacritic)
+ similarity = get_jaccard(args.delimiter, set1, set2, args.selection, args.ignoreAllDiacriticsButNotShadda, args.ignoreShaddaDiacritic)
 
  print("Jaccard Result:", similarity)
@@ -15,8 +15,8 @@ urls = {
  'four_grams':'https://sina.birzeit.edu/four_grams.pickle',
  'three_grams':'https://sina.birzeit.edu/three_grams.pickle',
  'two_grams':'https://sina.birzeit.edu/two_grams.pickle',
- 'synonyms_level2':'https://sina.birzeit.edu/synonyms_level2.pkl',
- 'synonyms_level3':'https://sina.birzeit.edu/synonyms_level3.pkl'
+ 'synonyms_level2':'https://sina.birzeit.edu/graph_l2.pkl',
+ 'synonyms_level3':'https://sina.birzeit.edu/graph_l3.pkl'
  }
 
  def get_appdatadir():
sinatools/VERSION CHANGED
@@ -1 +1 @@
- 0.1.26
+ 0.1.28
@@ -24,27 +24,27 @@ def find_solution(token, language, flag):
 
  def analyze(text, language ='MSA', task ='full', flag="1"):
  """
- This method processes an input text and returns morphological analysis for each token within the text, based on the specified language, task, and flag. As follows:
- If:
- The task is lemmatization, the morphological solution includes only the lemma_id, lemma, token, and token frequency.
- The task is pos, the morphological solution includes only the part-of-speech, token, and token frequency.
- The task is root, the morphological solution includes only the root, token, and token frequency.
- The task is full, the morphological solution includes the lemma_id, lemma, part-of-speech, root, token, and token frequency.
+ This method processes an input text and returns morphological analysis for each token within the text, based on the specified language, task, and flag. You can try the demo online. See the article for more details.
+
+ * If the task is lemmatization, the morphological solution includes only the lemma_id, lemma, token, and token frequency.
+ * If the task is pos, the morphological solution includes only the part-of-speech, token, and token frequency.
+ * If the task is root, the morphological solution includes only the root, token, and token frequency.
+ * If the task is full, the morphological solution includes the lemma_id, lemma, part-of-speech, root, token, and token frequency.
 
- Args:
+ Parameters:
  text (:obj:`str`): The Arabic text to be morphologically analyzed.
- language (:obj:`str`): The type of the input text. Currently, only Modern Standard Arabic (MSA) is supported.
+ language (:obj:`str`): Currently, only Modern Standard Arabic (MSA) is supported.
  task (:obj:`str`): The task to filter the results by. Options are [lemmatization, pos, root, full]. The default task if not specified is `full`.
- flag (:obj:`str`): The flag to filter the returned results. If the flag is `1`, the solution with the highest frequency will be returned. If the flag is `*`, all solutions will be returned, ordered descendingly, with the highest frequency solution first. The default flag if not specified is `1`.
+ flag (:obj:`str`): The flag to filter the returned results. If the flag is `1`, the solution with the highest frequency will be returned. If the flag is `*`, all solutions will be returned, ordered descendingly, with the highest frequency solution first. The default flag if not specified is `1`.
 
  Returns:
  list (:obj:`list`): A list of JSON objects, where each JSON object contains:
  token: The token from the original text.
- lemma: The lemma of the token.
- lemma_id: The id of the lemma.
- pos: The part-of-speech of the token.
- root: The root of the token.
- frequency: The frequency of the token.
+ lemma: The lemma of the token (lemmas from the Qabas lexicon).
+ lemma_id: The id of the lemma (Qabas lemma ids).
+ pos: The part-of-speech of the token (see Qabas POS tags).
+ root: The root of the token (Qabas roots).
+ frequency: The frequency of the token (see section 3 in the article).
 
  **Example:**
 
@@ -57,37 +57,36 @@ def analyze(text, language ='MSA', task ='full', flag="1"):
  #Example: task = full
  analyze('ذهب الولد الى المدرسة')
 
- [
- {
- "token": "ذهب",
- "lemma": "ذَهَبَ",
- "lemma_id": "202001617",
- "root": "ذ ه ب",
- "pos": "فعل ماضي",
- "frequency": "82202"
- },{
- "token": "الولد",
- "lemma": "وَلَدٌ",
- "lemma_id": "202003092",
- "root": "و ل د",
- "pos": "اسم",
- "frequency": "19066"
- },{
- "token": "إلى",
- "lemma": "إِلَى",
- "lemma_id": "202000856",
- "root": "إ ل ى",
- "pos": "حرف جر",
- "frequency": "7367507"
- },{
- "token": "المدرسة",
- "lemma": "مَدْرَسَةٌ",
- "lemma_id": "202002620",
- "root": "د ر س",
- "pos": "اسم",
- "frequency": "145285"
- }
- ]
+ [{
+ "token": "ذهب",
+ "lemma": "ذَهَبَ",
+ "lemma_id": "202001617",
+ "root": "ذ ه ب",
+ "pos": "فعل ماضي",
+ "frequency": "82202"
+ },{
+ "token": "الولد",
+ "lemma": "وَلَدٌ",
+ "lemma_id": "202003092",
+ "root": "و ل د",
+ "pos": "اسم",
+ "frequency": "19066"
+ },{
+ "token": "إلى",
+ "lemma": "إِلَى",
+ "lemma_id": "202000856",
+ "root": "إ ل ى",
+ "pos": "حرف جر",
+ "frequency": "7367507"
+ },{
+ "token": "المدرسة",
+ "lemma": "مَدْرَسَةٌ",
+ "lemma_id": "202002620",
+ "root": "د ر س",
+ "pos": "اسم",
+ "frequency": "145285"
+ }]
+
  """
 
  output_list = []
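The flag semantics documented in the `analyze` docstring (`1` returns the highest-frequency solution, `*` returns all solutions ordered by descending frequency) can be illustrated with a small standalone sketch. The `filter_by_flag` helper and the sample solution dicts are illustrative assumptions, not sinatools code:

```python
def filter_by_flag(solutions, flag="1"):
    """Illustrative only: '1' keeps the single highest-frequency
    solution, '*' returns all solutions ordered by descending
    frequency, mirroring the documented flag behavior."""
    ranked = sorted(solutions, key=lambda s: int(s["frequency"]), reverse=True)
    return ranked[:1] if flag == "1" else ranked

# Toy solutions for one token, frequencies as strings as in the docstring.
solutions = [
    {"lemma": "ذَهَبَ", "frequency": "82202"},
    {"lemma": "ذَهَبٌ", "frequency": "1024"},
]
```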
sinatools/ner/__init__.py CHANGED
@@ -7,6 +7,8 @@ import torch
  import pickle
  import json
  from argparse import Namespace
+ from transformers import pipeline
+ #from transformers import AutoModelForSequenceClassification
 
  tagger = None
  tag_vocab = None
@@ -35,4 +37,7 @@ if torch.cuda.is_available():
 
  train_config.trainer_config["kwargs"]["model"] = model
  tagger = load_object(train_config.trainer_config["fn"], train_config.trainer_config["kwargs"])
- tagger.load(os.path.join(model_path,"checkpoints"))
+ tagger.load(os.path.join(model_path,"checkpoints"))
+
+ pipe = pipeline("sentiment-analysis", model= os.path.join(path, "best_model"), return_all_scores =True, max_length=128, truncation=True)
+ #pipe = AutoModelForSequenceClassification.from_pretrained(os.path.join(path, "best_model"))
@@ -27,8 +27,49 @@ def convert_nested_to_flat(nested_tags):
 
  return flat_tags
 
- def extract(text, ner_method):
+ def extract(text, ner_method="nested"):
+ """
+ This method processes an input text and returns named entities for each token within the text. It supports 21 classes of entities, as well as flat and nested NER. You can try the demo online. See the article for details.
 
+ Args:
+ * text (:obj:`str`) – The Arabic text to be tagged.
+ * ner_method (:obj:`str`) – The NER method can produce either flat or nested output formats. The default method is nested.
+ nested method: If the method is nested, the output will include nested tags.
+ flat method: If the method is flat, the output will consist of only flat tags.
+ The choice between flat and nested methods determines the structure and detail of the named entity recognition output.
+
+ Returns:
+ A list of JSON objects, where each object contains:
+ token: The token from the original text.
+ NER tag: The label pairs for each segment.
+
+ **Example:**
+
+ .. highlight:: python
+ .. code-block:: python
+
+ from sinatools.ner.entity_extractor import extract
+ #Example of nested NER. Notice that the last word in this sentence contains nested tags.
+ extract('ذهب محمد الى جامعة بيرزيت')
+ #the output
+ [{
+ "token":"ذهب",
+ "tags":"O"
+ },{
+ "token":"محمد",
+ "tags":"B-PERS"
+ },{
+ "token":"إلى",
+ "tags":"O"
+ },{
+ "token":"جامعة",
+ "tags":"B-ORG"
+ },{
+ "token":"بيرزيت",
+ "tags":"B-GPE I-ORG"
+ }]
+ """
+
  dataset, token_vocab = text2segments(text)
 
  vocabs = namedtuple("Vocab", ["tags", "tokens"])
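The hunk header above references `convert_nested_to_flat`, which suggests how nested output relates to flat output. A toy illustration of one plausible flattening rule — keeping only the first tag when a token carries several, which is an assumption and not necessarily the package's actual rule — is:

```python
def flatten_tags(tagged_tokens):
    """Toy flattening rule: keep only the first (outermost) tag when a
    token carries multiple nested tags, e.g. "B-GPE I-ORG" -> "B-GPE".
    Illustration only, not sinatools' convert_nested_to_flat."""
    return [
        {"token": t["token"], "tags": t["tags"].split()[0]}
        for t in tagged_tokens
    ]
```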