fixed incorrect usage of the scale function (apparently)

also a slight README change
jan6 2021-07-05 23:37:53 +03:00
parent 3c17437117
commit 0f8ef4da2d
2 changed files with 6 additions and 3 deletions

README.md

@@ -21,9 +21,12 @@ but you can use any newline-separated list of strings
some other word lists to save you time:
* [dwyl/english-words](https://github.com/dwyl/english-words/)
* [positive-words.txt](https://gist.github.com/mkulakowski2/4289437) (needs removal of the comment block at the top)
* [pos/neg sentiment lexicon](https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html#lexicon) (needs comment block removal and .rar unpacking)
* [List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words)
* [lorenbrichter/Words](https://github.com/lorenbrichter/Words)
* [jlawler's word list from '99](http://www-personal.umich.edu/~jlawler/wordlist.html)
* [scottfrazer's GRE list](https://github.com/scottfrazer/gre/blob/master/words.txt) (needs trivial processing to print the first column; see the cleanup sketch after this list)
* [linuxwords dictionary by ola](https://users.cs.duke.edu/~ola/ap/linuxwords)
* [Debian's wordlist package](https://packages.debian.org/stable/wordlist)
* [Arch's words package](https://archlinux.org/packages/community/any/words/)
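
Most of these lists need the light cleanup noted above before wordmap can use them. A minimal sketch of that preprocessing (the filenames are placeholders, and the `;`/`#` comment markers are assumptions about the formats involved):

```python
# Hypothetical cleanup pass: drop blank and comment/header lines, keep only
# the first whitespace-separated column, and write one word per line.
with open("raw_list.txt", encoding="utf-8") as src, \
        open("wordlist.txt", "w", encoding="utf-8") as dst:
    for line in src:
        line = line.strip()
        if not line or line.startswith((";", "#")):
            continue  # skip blanks and comment-block lines
        dst.write(line.split()[0] + "\n")
```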


@@ -26,8 +26,8 @@ def wordmap(wordlist_file: str, input: str, digest_size: int = 4):
     words_max = len(words)
     digest_max = int.from_bytes(b'\xff' * digest_size, "big")
     input_hash = hashfun(input, digest_size=digest_size)
-    # words_max-1 is so that it can round up to the max
-    input_hash_mapped = scale(input_hash, (0, digest_max), (0, words_max-1))
+    # digest_max+1 is so that it can round up to the max
+    input_hash_mapped = scale(input_hash, (0, digest_max+1), (0, words_max))
     # alternatively, change round() to math.floor()
     input_hash_mapped = round(input_hash_mapped)
     print(words[input_hash_mapped], end="")
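
For context: `scale` isn't shown in this hunk, but from the call sites it reads as a linear range-mapping helper. Below is a minimal sketch under that assumption, plus a toy run (1-byte digest, 4-word list; the real default is a 4-byte digest) showing the skew the old bounds caused:

```python
def scale(value, src, dst):
    # Assumed behavior: linearly map value from the src range onto the dst
    # range. The project's actual helper may differ.
    (s0, s1), (d0, d1) = src, dst
    return d0 + (value - s0) * (d1 - d0) / (s1 - s0)

digest_max, words_max = 255, 4  # toy sizes for illustration
old = [0] * words_max
new = [0] * words_max
for h in range(digest_max + 1):
    # old call: round() gives the first and last words only half a bucket
    old[round(scale(h, (0, digest_max), (0, words_max - 1)))] += 1
    # new call, flooring as the in-diff comment suggests: equal buckets
    new[int(scale(h, (0, digest_max + 1), (0, words_max)))] += 1
print(old)  # [43, 85, 85, 43]
print(new)  # [64, 64, 64, 64]
```

One caveat worth noting: with the new bounds, keeping `round()` lets a hash in the top half-bucket round up to `words_max`, an out-of-range index (probability roughly 1/(2·words_max)), which is presumably why the in-code comment suggests `math.floor()`; the sketch above floors via `int()` for that reason.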