sonatoki-0.5.1-py3-none-any.whl → sonatoki-0.5.3-py3-none-any.whl

This diff shows the changes between two publicly released versions of this package, as they appear in their respective public registries. It is provided for informational purposes only.
--- sonatoki-0.5.1.dist-info/METADATA
+++ sonatoki-0.5.3.dist-info/METADATA
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: sonatoki
- Version: 0.5.1
+ Version: 0.5.3
  Summary: ilo li moku e toki li pana e sona ni: ni li toki ala toki pona?
  Author-Email: "jan Kekan San (@gregdan3)" <gregory.danielson3@gmail.com>
  License: AGPL-3.0-or-later
@@ -26,9 +26,9 @@ This library, "Language Knowledge," helps you identify whether a message is in T
 
  I wrote this library with a variety of scraps and lessons learned from a prior project, [ilo pi toki pona taso, "toki-pona-only tool"](https://github.com/gregdan3/ilo-pi-toki-pona-taso). That tool now uses this library to great success!
 
- If you've ever worked on a similar project, you know the question "is this message in [language]" is not a consistent one- the environment, time, preferences of the speaker, and much more, can all alter whether a given message is "in" any specific language. This complexity applies to Toki Pona too.
+ If you've ever worked on a similar project, you know the question "is this message in [language]" is not a consistent one- the environment, topic, preferences of the speaker, and much more, can all alter whether a given message is "in" any specific language. This complexity applies to Toki Pona too.
 
- So, this project "solves" that complex problem by offering an opinionated tokenizer and a configurable parser, allowing you to tune its output to your preferences and goals. [Even silly ones.](https://sona.pona.la/wiki/isipin_epiku).
+ So, this project "solves" that complex problem by offering an opinionated tokenizer and a configurable parser, allowing you to tune its output to your preferences and goals. [Even silly ones.](https://sona.pona.la/wiki/isipin_epiku)
 
  ## Quick Start
 
@@ -61,12 +61,12 @@ Or if you'd prefer to configure on your own:
  from copy import deepcopy
  from sonatoki.ilo import Ilo
  from sonatoki.Configs import BaseConfig
- from sonatoki.Filters import NimiLinkuCore, Phonotactic, ProperName
+ from sonatoki.Filters import NimiLinkuCore, NimiLinkuCommon, Phonotactic, ProperName, Or
  from sonatoki.Scorers import SoftPassFail
 
  def main():
      config = deepcopy(BaseConfig)
-     config["scoring_filters"].extend([NimiLinkuCore, Phonotactic, ProperName])
+     config["scoring_filters"].extend([Or(NimiLinkuCore, NimiLinkuCommon), Phonotactic, ProperName])
      config["scorer"] = SoftPassFail
 
      ilo = Ilo(**config)
@@ -78,7 +78,7 @@ if __name__ == "__main__":
      main()
  ```
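
For convenience, the + side of the two hunks above assembles into the following runnable sketch. The `is_toki_pona` call and the sample sentence are assumptions drawn from the surrounding README, not lines shown in this diff:

```py
from copy import deepcopy

from sonatoki.ilo import Ilo
from sonatoki.Configs import BaseConfig
from sonatoki.Filters import NimiLinkuCore, NimiLinkuCommon, Phonotactic, ProperName, Or
from sonatoki.Scorers import SoftPassFail


def main():
    # Start from the premade base config, then add the scoring filters from
    # the + side of the hunk above. Or() lets a token pass if it matches
    # either word list, so the two Linku categories act as a single filter.
    config = deepcopy(BaseConfig)
    config["scoring_filters"].extend(
        [Or(NimiLinkuCore, NimiLinkuCommon), Phonotactic, ProperName]
    )
    config["scorer"] = SoftPassFail

    ilo = Ilo(**config)
    # Assumed entry point: True if the message scores as Toki Pona.
    print(ilo.is_toki_pona("mi olin e sina"))


if __name__ == "__main__":
    main()
```

The substantive 0.5.3 change here is the switch from `NimiLinkuCore` alone to `Or(NimiLinkuCore, NimiLinkuCommon)`: words from either Linku category now count toward the score.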
 
- `Ilo` is highly configurable by necessity, so I recommend looking through the premade configs in `Configs` as well as the individual `Preprocessors`, `Filters`, and `Scorers`. The `Cleaners` module only contains one cleaner, which I recommend always using. Similarly, the `Tokenizers` module contains several other word tokenizers, but their performance will be worse than the dedicated Toki Pona tokenizer `WordTokenizerTok`.
+ `Ilo` is highly configurable by necessity, so I recommend looking through the premade configs in `Configs` as well as the individual `Preprocessors`, `Filters`, and `Scorers`. In `Cleaners`, all you need is `ConsecutiveDuplicates`. In `Tokenizers`, the preferred tokenizers `WordTokenizer` and `SentTokenizer` are already the default in `Ilo`.
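
A minimal sketch of that recommendation, assuming the cleaner exposes a `clean` classmethod (the method name is not shown in this diff):

```py
from sonatoki.Cleaners import ConsecutiveDuplicates

# Collapse runs of a repeated letter so emphatically stretched words can
# still match the word lists, e.g. "wawaaa" -> "wawa" (assumed behavior).
print(ConsecutiveDuplicates.clean("wawaaa"))
```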
 
  ## Development
 
@@ -92,25 +92,26 @@ if __name__ == "__main__":
 
  The intent is to show our methodology to the Unicode Consortium, particularly to the Script Encoding Working Group (previously the Script Ad Hoc Group). As far as we're aware, zero members of the committee know Toki Pona, which unfortunately means we fall back on English.
 
- After our proposal has been examined and a result given by the committee, I will translate this file and library into Toki Pona, with a note left behind for those who do not understand it.
+ I originally intended to translate this file and library into Toki Pona once Unicode had reviewed our proposal, but this library has picked up some interest outside of the Toki Pona community, so this library and README will remain accessible to them.
 
  ### What's the deal with the tokenizers?
 
- The Toki Pona tokenizer `sonatoki.Tokenizers.WordTokenizer` has the goal of tokenizing statements such that every token either represents a word candidate ("toki", "mumumu") or a complete non-candidate ("..!", "123").
- This design is highly undesirable for NLTK's English tokenizer because English words can have "punctuation" characters in them.
- But Toki Pona doesn't have any mid-word symbols when rendered in the Latin alphabet or in [Private Use Area Unicode characters](https://www.kreativekorp.com/ucsur/), so a more aggressive tokenizer is highly desirable.
+ The Toki Pona tokenizer `sonatoki.Tokenizers.WordTokenizer` attempts to tokenize statements such that every token either represents a word candidate ("toki", "mumumu") or a complete non-candidate ("..!", "123").
+ This design is highly undesirable for NLTK's English tokenizer because English words can have "punctuation" characters in them such as `'` or `-`.
+ Toki Pona doesn't have any mid-word symbols when rendered in the Latin alphabet or in [Private Use Area Unicode characters](https://www.kreativekorp.com/ucsur/), so a more aggressive tokenizer is highly desirable.
 
  The goal of splitting into word candidates and non-candidates is important, because any [encoding of Toki Pona's logographic script](https://www.kreativekorp.com/ucsur/charts/sitelen.html) will require each character be split into its own token, where the default behavior would be to leave consecutive non-punctuation together.
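
A small sketch of that contract, assuming the tokenizer's entry point is a `tokenize` classmethod (not shown in this diff); the exact token boundaries are illustrative:

```py
from sonatoki.Tokenizers import WordTokenizer

# Word candidates ("toki", "mumumu") become standalone tokens, while runs
# of punctuation ("..!") and numbers ("123") stay together as complete
# non-candidates.
print(WordTokenizer.tokenize("toki mumumu ..! 123"))
```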
 
  ### Aren't there a lot of false positives?
 
- Yes, depending on the filter you choose and how you apply it.
- It's up to you to use this tool responsibly on input you've done your best to clean, such as by using stronger filters before weaker ones.
- For now though, here's a list of relevant false positives:
+ For any individual filter, yes. Here are some examples:
 
- - `ProperName` will errantly match text in languages without a capital/lowercase distinction, artificially increasing scores.
- - `Alphabetic` will match a _lot_ of undesirable text- it essentially allows 14 letters of the English alphabet. For example, "I'm well" would match as _three_ words: "i", "m", "well".
- - `NimiPu` and other sets containing `a`, `mute`, `open`, and others will unavoidably match those words in English text too.
+ - `ProperName` will errantly match text in languages without a capital/lowercase distinction
+ - `Alphabetic` matches words so long as they are made only of letters in Toki Pona's alphabet, which is 14 letters of the Latin alphabet.
+ - `Syllabic` and `Phonetic`, despite imposing more structure than `Alphabetic`, will match a surprising number of English words. For example, every word in "an awesome joke!" matches.
+ - `NimiPu` and `NimiLinkuCore` will match `a`, `mute`, and `open` regardless of the surrounding language.
+
+ This is the point of `Ilo` and the `Scorers`: none of these filters would _individually_ be able to correctly identify a Toki Pona statement, but all of them working together with some tuning are able to achieve a surprisingly high accuracy.
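
To make the "an awesome joke!" example concrete, a minimal sketch assuming each filter exposes a per-token `filter` classmethod (not shown in this diff):

```py
from sonatoki.Filters import Syllabic

# Each of these English words happens to fit Toki Pona's syllable shapes,
# so this filter alone cannot distinguish the sentence from Toki Pona.
for word in ["an", "awesome", "joke"]:
    print(word, Syllabic.filter(word))
```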
 
  ### Don't some of the cleaners/filters conflict?
 
--- sonatoki-0.5.1.dist-info/RECORD
+++ sonatoki-0.5.3.dist-info/RECORD
@@ -1,20 +1,20 @@
- sonatoki-0.5.1.dist-info/METADATA,sha256=gj5B_q10R5l-w0jEuzFY2035qzp9tpmBQ-sZ0q73zXE,6370
- sonatoki-0.5.1.dist-info/WHEEL,sha256=SOP-4bEE0jbVaCHQGVvF08uWxk5rcSsfEybvoQVHlD8,90
- sonatoki-0.5.1.dist-info/licenses/LICENSE,sha256=DZak_2itbUtvHzD3E7GNUYSRK6jdOJ-GqncQ2weavLA,34523
+ sonatoki-0.5.3.dist-info/METADATA,sha256=mC-i9FszUcyFA8peFVjRvj5QxCoVFjfHf60UWZNxquA,6517
+ sonatoki-0.5.3.dist-info/WHEEL,sha256=rSwsxJWe3vzyR5HCwjWXQruDgschpei4h_giTm0dJVE,90
+ sonatoki-0.5.3.dist-info/licenses/LICENSE,sha256=DZak_2itbUtvHzD3E7GNUYSRK6jdOJ-GqncQ2weavLA,34523
  sonatoki/Cleaners.py,sha256=x2dT3MpDUfbrHA0EP2D3n1sTiKFFi5jw9ha-1dX973o,1958
- sonatoki/Configs.py,sha256=HHaSAA7hus7aY6Xy-3fNlbzMwk3wJO0HrjTssg8P78M,4291
+ sonatoki/Configs.py,sha256=yprG3LEMyy6KKJWEEeJ7nEIC3-qtqA7p4CTHYv4a4vU,5469
  sonatoki/Filters.py,sha256=nVSmw5M4sEYA_8KI1fI53rMHkd9KO6yWbKfdxxExxN8,11700
- sonatoki/Preprocessors.py,sha256=zuu-6SLqFgk88vfSnYlyZjZrzoZQ56U_1SFXoxThQDQ,5628
+ sonatoki/Preprocessors.py,sha256=nN6xL6mvVAnWZjSNW8CaeLm8x4kK3dCoB-1WYqi0ANU,5763
  sonatoki/Scorers.py,sha256=LRQLgXKTU2VqhkMHFPVxyVt83DXf85_zrpDGk4ThU24,3811
  sonatoki/Tokenizers.py,sha256=qFaA1-v-wjKMihtEJMeZpi3m4cSkJQgWhGhL-w0VgPE,4236
  sonatoki/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
  sonatoki/__main__.py,sha256=6n4kUF80APl6a0jV46h_ncHNuQbrLpZ_nAmiNAakiag,5673
  sonatoki/alphabetic.txt,sha256=duyqAKilD2vLIr75RShCIAnktNJcGeEoQIk18V6czmg,11702
- sonatoki/constants.py,sha256=a3OjhtH2Jp6RDot1NE-PrQfm2VzfM850b-qipFLnjS4,18868
+ sonatoki/constants.py,sha256=BYML7p9oUELgUDO0xdgmP74idcwjiFSw_NfuDLpsp8k,18952
  sonatoki/ilo.py,sha256=PWZa202Q4h7IjnLxmfgT93iAPJL7dqJbA97L9kQDPiA,5658
- sonatoki/linku.json,sha256=FLsaESG01rQ88OU8HvwOUl_P9qtGykJ1X-1xoMVDkKA,295077
+ sonatoki/linku.json,sha256=d72Dvht-a4gBmdqLLI8mElvo83zSpbxDmxJj05hOudM,295413
  sonatoki/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- sonatoki/sandbox.json,sha256=3BpCEjw-kB4z7DJAJ2UrE1YuFIe3knat8qi1iYuAIq4,83555
+ sonatoki/sandbox.json,sha256=44csrQDaVtV-n8OyewabX1J9MmUFCsPct5C8E5Xuc58,140197
  sonatoki/syllabic.txt,sha256=HnqY4TrZ3tPcHah3TsvG9F9gjMrnAGdJ8hHJNHyyUPc,1712
  sonatoki/utils.py,sha256=sT5xLMEj0aLpy8GP92HKblJU1Wt1m8NUlMgCFWB32xQ,2265
- sonatoki-0.5.1.dist-info/RECORD,,
+ sonatoki-0.5.3.dist-info/RECORD,,
--- sonatoki-0.5.1.dist-info/WHEEL
+++ sonatoki-0.5.3.dist-info/WHEEL
@@ -1,4 +1,4 @@
  Wheel-Version: 1.0
- Generator: pdm-backend (2.3.1)
+ Generator: pdm-backend (2.3.3)
  Root-Is-Purelib: true
  Tag: py3-none-any