fuzzywuzzy 0.15.1-1 source package in Ubuntu
Changelog
fuzzywuzzy (0.15.1-1) unstable; urgency=medium

  * New upstream release.
  * Update Standards-Version (no changes).

 -- Edward Betts <edward@4angle.com>  Thu, 05 Oct 2017 09:04:41 +0100
Upload details
- Uploaded by: Debian Python Modules Team
- Uploaded to: Sid
- Original maintainer: Debian Python Modules Team
- Architectures: all
- Section: misc
- Urgency: Medium
Downloads
| File | Size | SHA-256 Checksum |
|---|---|---|
| fuzzywuzzy_0.15.1-1.dsc | 2.3 KiB | baa458d9eaa63b312173cf15ac006fe77a34b9dcdfc98f66f01608af776655e9 |
| fuzzywuzzy_0.15.1.orig.tar.gz | 26.1 KiB | 3ed1a125d682208aa327516eb56fc69cff76215230efa0792afd1f3cb6975214 |
| fuzzywuzzy_0.15.1-1.debian.tar.xz | 3.3 KiB | 6378bfbbd53e194107876966e1a8a09969aedace11e466ecae5481733d96d8dd |
Available diffs
- diff from 0.15.0-2 to 0.15.1-1 (10.5 KiB)
No changes file available.
Binary packages built by this source
- python-fuzzywuzzy: Python module for fuzzy string matching
Various methods for fuzzy matching of strings in Python, including:
.
- String similarity: Gives a measure of string similarity between 0 and 100.
  - Partial string similarity: Inconsistent substrings are a common problem
    when matching strings. To get around this, a "best partial" heuristic is
    used when the two strings are of noticeably different lengths.
- Token sort: This approach involves tokenizing the string in question,
sorting the tokens alphabetically, and then joining them back into a
string.
- Token set: A slightly more flexible approach. Tokenize both strings, but
instead of immediately sorting and comparing, split the tokens into two
groups: intersection and remainder.
- python3-fuzzywuzzy: Python 3 module for fuzzy string matching
Various methods for fuzzy matching of strings in Python, including:
.
- String similarity: Gives a measure of string similarity between 0 and 100.
  - Partial string similarity: Inconsistent substrings are a common problem
    when matching strings. To get around this, a "best partial" heuristic is
    used when the two strings are of noticeably different lengths.
- Token sort: This approach involves tokenizing the string in question,
sorting the tokens alphabetically, and then joining them back into a
string.
- Token set: A slightly more flexible approach. Tokenize both strings, but
instead of immediately sorting and comparing, split the tokens into two
groups: intersection and remainder.
.
This package contains fuzzywuzzy for Python 3.
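The four strategies listed in the package description can be sketched with only the Python standard library. This is an illustrative re-implementation of the concepts using `difflib.SequenceMatcher`, not fuzzywuzzy's actual code; the function names mirror fuzzywuzzy's API (`ratio`, `partial_ratio`, `token_sort_ratio`, `token_set_ratio`), but the exact scores may differ from the library's own, which uses different scoring internals.

```python
# Conceptual sketch of the four matching strategies, using only the
# standard library. Not fuzzywuzzy itself; scores are approximate.
from difflib import SequenceMatcher


def ratio(a: str, b: str) -> int:
    """String similarity: a score between 0 and 100."""
    return round(SequenceMatcher(None, a, b).ratio() * 100)


def partial_ratio(a: str, b: str) -> int:
    """Best-partial heuristic: slide the shorter string across the longer
    one, comparing against every same-length substring; keep the best score."""
    shorter, longer = sorted((a, b), key=len)
    if not shorter:
        return 0
    return max(
        ratio(shorter, longer[i:i + len(shorter)])
        for i in range(len(longer) - len(shorter) + 1)
    )


def token_sort_ratio(a: str, b: str) -> int:
    """Token sort: tokenize, sort alphabetically, rejoin, then compare."""
    def normalize(s: str) -> str:
        return " ".join(sorted(s.lower().split()))
    return ratio(normalize(a), normalize(b))


def token_set_ratio(a: str, b: str) -> int:
    """Token set: split tokens into intersection and remainders, then take
    the best score among comparisons built from the sorted intersection."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    inter = " ".join(sorted(ta & tb))
    combined_a = (inter + " " + " ".join(sorted(ta - tb))).strip()
    combined_b = (inter + " " + " ".join(sorted(tb - ta))).strip()
    return max(ratio(inter, combined_a),
               ratio(inter, combined_b),
               ratio(combined_a, combined_b))
```

For example, `token_sort_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")` scores 100 because sorting the tokens makes the two strings identical, and `partial_ratio` scores a short string highly against a much longer one that contains it.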