Gensim is a Python library for topic modelling, document indexing and similarity retrieval with large corpora. The target audience is the natural language processing (NLP) and information retrieval (IR) community.
⚠️ 3.8.x will be the last Gensim version to support Py2.7. Starting with 4.0.0, Gensim will only support Py3.5 and above.
3.8.3, 2020-05-03
This is primarily a bugfix release to bring back Py2.7 compatibility to gensim 3.8.
🔴 Bug fixes
- Bring back Py27 support (PR #2812, @mpenkov)
- Fix wrong version reported by setup.py (Issue #2796)
- Fix missing C extensions (Issues #2794 and #2802)
👍 Improvements
- Wheels for Python 3.8 (@menshikh-iv)
- Prepare for removal of deprecated `lxml.etree.cElementTree` (PR #2777, @tirkarthi)
📚 Tutorial and doc improvements
- Update test instructions in README (PR #2814, @piskvorky)
⚠️ Deprecations (will be removed in the next major release)

- Remove
  - `gensim.models.FastText.load_fasttext_format`: use `load_facebook_vectors` to load embeddings only (faster, less CPU/memory usage, does not support training continuation) and `load_facebook_model` to load full model (slower, more CPU/memory intensive, supports training continuation)
  - `gensim.models.wrappers.fasttext` (obsoleted by the new native `gensim.models.fasttext` implementation)
  - `gensim.examples`
  - `gensim.nosy`
  - `gensim.scripts.word2vec_standalone`
  - `gensim.scripts.make_wiki_lemma`
  - `gensim.scripts.make_wiki_online`
  - `gensim.scripts.make_wiki_online_lemma`
  - `gensim.scripts.make_wiki_online_nodebug`
  - `gensim.scripts.make_wiki` (all of these obsoleted by the new native `gensim.scripts.segment_wiki` implementation)
  - "deprecated" functions and attributes
- Move
  - `gensim.scripts.make_wikicorpus` ➡ `gensim.scripts.make_wiki.py`
  - `gensim.summarization` ➡ `gensim.models.summarization`
  - `gensim.topic_coherence` ➡ `gensim.models._coherence`
  - `gensim.utils` ➡ `gensim.utils.utils` (old imports will continue to work)
  - `gensim.parsing.*` ➡ `gensim.utils.text_utils`
3.8.2, 2020-04-10
🔴 Bug fixes
- Pin `smart_open` version for compatibility with Py2.7
⚠️ Deprecations (will be removed in the next major release)

- Remove
  - `gensim.models.FastText.load_fasttext_format`: use `load_facebook_vectors` to load embeddings only (faster, less CPU/memory usage, does not support training continuation) and `load_facebook_model` to load full model (slower, more CPU/memory intensive, supports training continuation)
  - `gensim.models.wrappers.fasttext` (obsoleted by the new native `gensim.models.fasttext` implementation)
  - `gensim.examples`
  - `gensim.nosy`
  - `gensim.scripts.word2vec_standalone`
  - `gensim.scripts.make_wiki_lemma`
  - `gensim.scripts.make_wiki_online`
  - `gensim.scripts.make_wiki_online_lemma`
  - `gensim.scripts.make_wiki_online_nodebug`
  - `gensim.scripts.make_wiki` (all of these obsoleted by the new native `gensim.scripts.segment_wiki` implementation)
  - "deprecated" functions and attributes
- Move
  - `gensim.scripts.make_wikicorpus` ➡ `gensim.scripts.make_wiki.py`
  - `gensim.summarization` ➡ `gensim.models.summarization`
  - `gensim.topic_coherence` ➡ `gensim.models._coherence`
  - `gensim.utils` ➡ `gensim.utils.utils` (old imports will continue to work)
  - `gensim.parsing.*` ➡ `gensim.utils.text_utils`
3.8.1, 2019-09-23
🔴 Bug fixes
- Fix usage of base_dir instead of BASE_DIR in _load_info in downloader. (movb, #2605)
- Update the version of smart_open in the setup.py file (AMR-KELEG, #2582)
- Properly handle unicode_errors arg parameter when loading a vocab file (wmtzk, #2570)
- Catch loading older TfidfModels without smartirs (bnomis, #2559)
- Fix bug where a module import set up logging, pin doctools for Py2 (piskvorky, #2552)
📚 Tutorial and doc improvements
👍 Improvements
⚠️ Deprecations (will be removed in the next major release)

- Remove
  - `gensim.models.FastText.load_fasttext_format`: use `load_facebook_vectors` to load embeddings only (faster, less CPU/memory usage, does not support training continuation) and `load_facebook_model` to load full model (slower, more CPU/memory intensive, supports training continuation)
  - `gensim.models.wrappers.fasttext` (obsoleted by the new native `gensim.models.fasttext` implementation)
  - `gensim.examples`
  - `gensim.nosy`
  - `gensim.scripts.word2vec_standalone`
  - `gensim.scripts.make_wiki_lemma`
  - `gensim.scripts.make_wiki_online`
  - `gensim.scripts.make_wiki_online_lemma`
  - `gensim.scripts.make_wiki_online_nodebug`
  - `gensim.scripts.make_wiki` (all of these obsoleted by the new native `gensim.scripts.segment_wiki` implementation)
  - "deprecated" functions and attributes
- Move
  - `gensim.scripts.make_wikicorpus` ➡ `gensim.scripts.make_wiki.py`
  - `gensim.summarization` ➡ `gensim.models.summarization`
  - `gensim.topic_coherence` ➡ `gensim.models._coherence`
  - `gensim.utils` ➡ `gensim.utils.utils` (old imports will continue to work)
  - `gensim.parsing.*` ➡ `gensim.utils.text_utils`
3.8.0, 2019-07-08
⚠️ 3.8.x will be the last Gensim version to support Py2.7. Starting with 4.0.0, Gensim will only support Py3.5 and above.
🌟 New Features
- Enable online training of Poincare models (koiizukag, #2505)
- Make BM25 more scalable by adding support for generator inputs (saraswatmks, #2479)
- Allow the Gensim dataset / pre-trained model downloader `gensim.downloader` to run offline, by introducing a local file cache (mpenkov, #2545)
- Make the `gensim.downloader` target directory configurable (mpenkov, #2456); see the sketch after this list
- Support fast kNN document similarity search using NMSLIB (masa3141, #2417)
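A quick sketch of the offline cache and configurable target directory (this example is ours, not from the release notes; it assumes the downloader reads a `GENSIM_DATA_DIR` environment variable and that the variable is picked up when the module is imported):

```python
import os

# Hypothetical target directory; assumes gensim.downloader reads GENSIM_DATA_DIR
# when it is first imported, so set it before the import.
os.environ["GENSIM_DATA_DIR"] = "/tmp/my-gensim-data"

import gensim.downloader as api

# The first call downloads into GENSIM_DATA_DIR; later calls, even offline,
# are served from the local file cache.
wv = api.load("glove-wiki-gigaword-50")
print(wv.most_similar("cat", topn=3))
```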
🔴 Bug fixes
- Fix `smart_open` deprecation warning globally (itayB, #2530)
- Fix AppVeyor issues with Windows and Py2 (mpenkov, #2546)
- Fix `topn=0` versus `topn=None` bug in `most_similar`, accept `topn` of any integer type (Witiko, #2497)
- Fix Python version check (charsyam, #2547)
- Fix typo in FastText documentation (Guitaricet, #2518)
- Fix "Market Matrix" to "Matrix Market" typo. (Shooter23, #2513)
- Fix auto-generated hyperlinks in `CHANGELOG.md` (mpenkov, #2482)
📚 Tutorial and doc improvements
- Generate documentation for the `gensim.similarities.termsim` module (Witiko, #2485)
- Simplify the `Support` section in README (piskvorky, #2542)
👍 Improvements
⚠️ Deprecations (will be removed in the next major release)

- Remove
  - `gensim.models.FastText.load_fasttext_format`: use `load_facebook_vectors` to load embeddings only (faster, less CPU/memory usage, does not support training continuation) and `load_facebook_model` to load full model (slower, more CPU/memory intensive, supports training continuation)
  - `gensim.models.wrappers.fasttext` (obsoleted by the new native `gensim.models.fasttext` implementation)
  - `gensim.examples`
  - `gensim.nosy`
  - `gensim.scripts.word2vec_standalone`
  - `gensim.scripts.make_wiki_lemma`
  - `gensim.scripts.make_wiki_online`
  - `gensim.scripts.make_wiki_online_lemma`
  - `gensim.scripts.make_wiki_online_nodebug`
  - `gensim.scripts.make_wiki` (all of these obsoleted by the new native `gensim.scripts.segment_wiki` implementation)
  - "deprecated" functions and attributes
- Move
  - `gensim.scripts.make_wikicorpus` ➡ `gensim.scripts.make_wiki.py`
  - `gensim.summarization` ➡ `gensim.models.summarization`
  - `gensim.topic_coherence` ➡ `gensim.models._coherence`
  - `gensim.utils` ➡ `gensim.utils.utils` (old imports will continue to work)
  - `gensim.parsing.*` ➡ `gensim.utils.text_utils`
3.7.3, 2019-05-06
🔴 Bug fixes
- Fix fasttext model loading from gzip files (mpenkov, #2476)
- Fix misleading `Doc2Vec.docvecs` comment (gojomo, #2472)
- NMF bugfix (mpenkov, #2466)
- Fix `WordEmbeddingsKeyedVectors.most_similar` (Witiko, #2461)
- Fix LdaSequence model by updating to num_documents (Bharat123rox, #2410)
- Make termsim matrix positive definite even with negative similarities (Witiko, #2397)
- Fix the off-by-one bug in the TFIDF model. (AMR-KELEG, #2392)
- Update legacy model loading (mpenkov, #2454, #2457)
- Make `matutils.unitvec` always return float norm when requested (Witiko, #2419)
📚 Tutorial and doc improvements
👍 Improvements
- Adding type check for corpus_file argument (saraswatmks, #2469)
- Clean up FastText Cython code, fix division by zero (mpenkov, #2382)
⚠️ Deprecations (will be removed in the next major release)

- Remove
  - `gensim.models.FastText.load_fasttext_format`: use `load_facebook_vectors` to load embeddings only (faster, less CPU/memory usage, does not support training continuation) and `load_facebook_model` to load full model (slower, more CPU/memory intensive, supports training continuation)
  - `gensim.models.wrappers.fasttext` (obsoleted by the new native `gensim.models.fasttext` implementation)
  - `gensim.examples`
  - `gensim.nosy`
  - `gensim.scripts.word2vec_standalone`
  - `gensim.scripts.make_wiki_lemma`
  - `gensim.scripts.make_wiki_online`
  - `gensim.scripts.make_wiki_online_lemma`
  - `gensim.scripts.make_wiki_online_nodebug`
  - `gensim.scripts.make_wiki` (all of these obsoleted by the new native `gensim.scripts.segment_wiki` implementation)
  - "deprecated" functions and attributes
- Move
  - `gensim.scripts.make_wikicorpus` ➡ `gensim.scripts.make_wiki.py`
  - `gensim.summarization` ➡ `gensim.models.summarization`
  - `gensim.topic_coherence` ➡ `gensim.models._coherence`
  - `gensim.utils` ➡ `gensim.utils.utils` (old imports will continue to work)
  - `gensim.parsing.*` ➡ `gensim.utils.text_utils`
3.7.2, 2019-04-06
🌟 New Features
- `gensim.models.fasttext.load_facebook_model` function: load full model (slower, more CPU/memory intensive, supports training continuation)

```python
>>> from gensim.models.fasttext import load_facebook_model
>>> from gensim.test.utils import datapath
>>>
>>> cap_path = datapath("crime-and-punishment.bin")
>>> fb_model = load_facebook_model(cap_path)
>>>
>>> 'landlord' in fb_model.wv.vocab  # Word is out of vocabulary
False
>>> oov_term = fb_model.wv['landlord']
>>>
>>> 'landlady' in fb_model.wv.vocab  # Word is in the vocabulary
True
>>> iv_term = fb_model.wv['landlady']
>>>
>>> new_sent = [['lord', 'of', 'the', 'rings'], ['lord', 'of', 'the', 'flies']]
>>> fb_model.build_vocab(new_sent, update=True)
>>> fb_model.train(sentences=new_sent, total_examples=len(new_sent), epochs=5)
```

- `gensim.models.fasttext.load_facebook_vectors` function: load embeddings only (faster, less CPU/memory usage, does not support training continuation)

```python
>>> from gensim.models.fasttext import load_facebook_vectors
>>>
>>> fbkv = load_facebook_vectors(cap_path)  # cap_path as in the previous example
>>>
>>> 'landlord' in fbkv.vocab  # Word is out of vocabulary
False
>>> oov_vector = fbkv['landlord']
>>>
>>> 'landlady' in fbkv.vocab  # Word is in the vocabulary
True
>>> iv_vector = fbkv['landlady']
```
🔴 Bug fixes
- Fix unicode error when loading FastText vocabulary (@mpenkov, #2390)
- Avoid division by zero in fasttext_inner.pyx (@mpenkov, #2404)
- Avoid incorrect filename inference when loading model (@mpenkov, #2408)
- Handle invalid unicode when loading native FastText models (@mpenkov, #2411)
- Avoid divide by zero when calculating vectors for terms with no ngrams (@mpenkov, #2411)
📚 Tutorial and doc improvements
- Add link to bindr (rogueleaderr, #2387)
👍 Improvements
⚠️ Changes in FastText behavior
Out-of-vocab word handling

To achieve consistency with the reference implementation from Facebook, a `FastText` model will now always report any word, out-of-vocabulary or not, as being in the model, and will always return some vector for any word looked up. Specifically (a short sketch follows this list):

- `'any_word' in ft_model` will always return `True`. Previously, it returned `True` only if the full word was in the vocabulary. (To test whether a full word is in the known vocabulary, consult the `wv.vocab` property: `'any_word' in ft_model.wv.vocab` will return `False` if the full word wasn't learned during model training.)
- `ft_model['any_word']` will always return a vector. Previously, it raised `KeyError` for OOV words when the model had no vectors for any ngrams of the word.
- If no ngrams from the term are present in the model, or no ngrams could be extracted from the term, a vector pointing to the origin is returned. Previously, a vector of NaN (not a number) was returned as a consequence of a divide-by-zero problem.
- Models may use more memory, or take longer for word-vector lookup, especially after training on smaller corpora where the previous non-compliant behavior discarded some ngrams from consideration.
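A minimal sketch of the new contract, reusing the `crime-and-punishment.bin` test model from the examples above (the commented results follow from the rules just described, not from output reproduced in the release notes):

```python
from gensim.test.utils import datapath
from gensim.models.fasttext import load_facebook_model

ft_model = load_facebook_model(datapath("crime-and-punishment.bin"))

print('landlord' in ft_model.wv.vocab)  # False: the full word was never seen during training
print('landlord' in ft_model)           # True: membership now only means "a vector can be returned"
vector = ft_model.wv['landlord']        # no KeyError: the vector is assembled from character ngrams
```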
Loading models in Facebook .bin format

The `gensim.models.FastText.load_fasttext_format` function (deprecated) now loads the entire model contained in the .bin file, including the shallow neural network that enables training continuation. Loading this NN requires more CPU and RAM than previously required. Since this function is deprecated, consider using one of its alternatives (see below).

Furthermore, you must now pass the full path to the file to load, including the file extension. Previously, if you specified a model path that ends with anything other than .bin, the code automatically appended .bin to the path before loading the model. This behavior was confusing, so we removed it.
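For example (the path below is hypothetical; the point is only that the `.bin` suffix must now be written out):

```python
from gensim.models import FastText

# Previously, "/data/fb_model" was silently rewritten to "/data/fb_model.bin".
# Now the full path, including the extension, is required.
model = FastText.load_fasttext_format("/data/fb_model.bin")  # deprecated; prefer the alternatives below
```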
⚠️ Deprecations (will be removed in the next major release)

- Remove
  - `gensim.models.FastText.load_fasttext_format`: use `load_facebook_vectors` to load embeddings only (faster, less CPU/memory usage, does not support training continuation) and `load_facebook_model` to load full model (slower, more CPU/memory intensive, supports training continuation)
  - `gensim.models.wrappers.fasttext` (obsoleted by the new native `gensim.models.fasttext` implementation)
  - `gensim.examples`
  - `gensim.nosy`
  - `gensim.scripts.word2vec_standalone`
  - `gensim.scripts.make_wiki_lemma`
  - `gensim.scripts.make_wiki_online`
  - `gensim.scripts.make_wiki_online_lemma`
  - `gensim.scripts.make_wiki_online_nodebug`
  - `gensim.scripts.make_wiki` (all of these obsoleted by the new native `gensim.scripts.segment_wiki` implementation)
  - "deprecated" functions and attributes
- Move
  - `gensim.scripts.make_wikicorpus` ➡ `gensim.scripts.make_wiki.py`
  - `gensim.summarization` ➡ `gensim.models.summarization`
  - `gensim.topic_coherence` ➡ `gensim.models._coherence`
  - `gensim.utils` ➡ `gensim.utils.utils` (old imports will continue to work)
  - `gensim.parsing.*` ➡ `gensim.utils.text_utils`
3.7.1, 2019-01-31
👍 Improvements
- NMF optimization & documentation (@anotherbugmaster, #2361)
- Optimize `FastText.load_fasttext_model` (@mpenkov, #2340)
- Add warning when string is used as argument to `Doc2Vec.infer_vector` (@tobycheese, #2347)
- Fix light linting issues in `LdaSeqModel` (@horpto, #2360)
- Move out `process_result_queue` from cycle in `LdaMulticore` (@horpto, #2358)
🔴 Bug fixes
- Fix infinite diff in `LdaModel.do_mstep` (@horpto, #2344)
- Fix backward compatibility issue: loading `FastTextKeyedVectors` using `KeyedVectors` (missing attribute `compatible_hash`) (@menshikh-iv, #2349)
- Fix logging issue (conda-forge related) (@menshikh-iv, #2339)
- Fix `WordEmbeddingsKeyedVectors.most_similar` (@Witiko, #2356)
- Fix issues of `flake8==3.7.1` (@horpto, #2365)
📚 Tutorial and doc improvements
- Improve `FastText` documentation (@mpenkov, #2353)
- Minor corrections and improvements in `Any*Vec` docstrings (@tobycheese, #2345)
- Fix the example code for SparseTermSimilarityMatrix (@Witiko, #2359)
- Update `poincare` documentation to indicate the relation format (@AMR-KELEG, #2357)
⚠️ Deprecations (will be removed in the next major release)

- Remove
  - `gensim.models.wrappers.fasttext` (obsoleted by the new native `gensim.models.fasttext` implementation)
  - `gensim.examples`
  - `gensim.nosy`
  - `gensim.scripts.word2vec_standalone`
  - `gensim.scripts.make_wiki_lemma`
  - `gensim.scripts.make_wiki_online`
  - `gensim.scripts.make_wiki_online_lemma`
  - `gensim.scripts.make_wiki_online_nodebug`
  - `gensim.scripts.make_wiki` (all of these obsoleted by the new native `gensim.scripts.segment_wiki` implementation)
  - "deprecated" functions and attributes
- Move
  - `gensim.scripts.make_wikicorpus` ➡ `gensim.scripts.make_wiki.py`
  - `gensim.summarization` ➡ `gensim.models.summarization`
  - `gensim.topic_coherence` ➡ `gensim.models._coherence`
  - `gensim.utils` ➡ `gensim.utils.utils` (old imports will continue to work)
  - `gensim.parsing.*` ➡ `gensim.utils.text_utils`
3.7.0, 2019-01-18
🌟 New features
- Fast Online NMF (@anotherbugmaster, #2007)
- Benchmark on `wiki-english-20171001`:

| Model | Perplexity | Coherence | L2 norm | Train time (minutes) |
|---|---|---|---|---|
| LDA | 4727.07 | -2.514 | 7.372 | 138 |
| NMF | 975.74 | -2.814 | 7.265 | 73 |
| NMF (with regularization) | 985.57 | -2.436 | 7.269 | 441 |

- Simple to use (same interface as `LdaModel`):

```python
from gensim.models.nmf import Nmf
from gensim.corpora import Dictionary
import gensim.downloader as api

text8 = api.load('text8')

dictionary = Dictionary(text8)
dictionary.filter_extremes()

corpus = [dictionary.doc2bow(doc) for doc in text8]

nmf = Nmf(
    corpus=corpus,
    num_topics=5,
    id2word=dictionary,
    chunksize=2000,
    passes=5,
    random_state=42,
)

nmf.show_topics()
"""
[(0, '0.007*"km" + 0.006*"est" + 0.006*"islands" + 0.004*"league" + 0.004*"rate" + 0.004*"female" + 0.004*"economy" + 0.003*"male" + 0.003*"team" + 0.003*"elections"'),
 (1, '0.006*"actor" + 0.006*"player" + 0.004*"bwv" + 0.004*"writer" + 0.004*"actress" + 0.004*"singer" + 0.003*"emperor" + 0.003*"jewish" + 0.003*"italian" + 0.003*"prize"'),
 (2, '0.036*"college" + 0.007*"institute" + 0.004*"jewish" + 0.004*"universidad" + 0.003*"engineering" + 0.003*"colleges" + 0.003*"connecticut" + 0.003*"technical" + 0.003*"jews" + 0.003*"universities"'),
 (3, '0.016*"import" + 0.008*"insubstantial" + 0.007*"y" + 0.006*"soviet" + 0.004*"energy" + 0.004*"info" + 0.003*"duplicate" + 0.003*"function" + 0.003*"z" + 0.003*"jargon"'),
 (4, '0.005*"software" + 0.004*"games" + 0.004*"windows" + 0.003*"microsoft" + 0.003*"films" + 0.003*"apple" + 0.003*"video" + 0.002*"album" + 0.002*"fiction" + 0.002*"characters"')]
"""
```
- Massive improvement of `FastText` compatibilities (@mpenkov, #2313)

```python
from gensim.models import FastText

# 'cc.ru.300.bin' - Russian Facebook FT model trained on Common Crawl
# Can be downloaded from https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ru.300.bin.gz
model = FastText.load_fasttext_format("cc.ru.300.bin")

# The fixed hash function produces the same output as Facebook's fastText
# and works correctly for non-Latin languages (for example, Russian)
assert "мяу" in model.wv.vocab  # 'мяу' - vocab word
model.wv.most_similar("мяу")
"""
[('Мяу', 0.6820122003555298),
 ('МЯУ', 0.6373013257980347),
 ('мяу-мяу', 0.593108594417572),
 ('кис-кис', 0.5899622440338135),
 ('гав', 0.5866007804870605),
 ('Кис-кис', 0.5798211097717285),
 ('Кис-кис-кис', 0.5742273330688477),
 ('Мяу-мяу', 0.5699705481529236),
 ('хрю-хрю', 0.5508339405059814),
 ('ав-ав', 0.5479759573936462)]
"""

assert "котогород" not in model.wv.vocab  # 'котогород' - out-of-vocab word
model.wv.most_similar("котогород", topn=3)
"""
[('автогород', 0.5463314652442932),
 ('ТагилНовокузнецкНовомосковскНовороссийскНовосибирскНовотроицкНовочеркасскНовошахтинскНовый', 0.5423436164855957),
 ('областьНовосибирскБарабинскБердскБолотноеИскитимКарасукКаргатКуйбышевКупиноОбьТатарскТогучинЧерепаново', 0.5377570390701294)]
"""

# We loaded the full model, so we can continue training it
from gensim.test.utils import datapath
from smart_open import smart_open

with smart_open(datapath("crime-and-punishment.txt"), encoding="utf-8") as infile:  # Russian text
    corpus = [line.strip().split() for line in infile]

model.train(corpus, total_examples=len(corpus), epochs=5)
```
- Similarity search improvements (@Witiko, #2016)
  - Add similarity search using the Levenshtein distance in `gensim.similarities.LevenshteinSimilarityIndex`
  - Performance optimizations to `gensim.similarities.SoftCosineSimilarity` (full benchmark):

    | dictionary size | corpus size | speed |
    |---|---|---|
    | 1000 | 100 | 1.0× |
    | 1000 | 1000 | 53.4× |
    | 1000 | 100000 | 156784.8× |
    | 100000 | 100 | 3.8× |
    | 100000 | 1000 | 405.8× |
    | 100000 | 100000 | 66262.0× |

  - See the updated soft-cosine tutorial for more information and usage examples; a minimal sketch follows below.
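A minimal sketch of Levenshtein-based term similarity feeding a soft cosine search (the toy corpus is ours; it assumes `LevenshteinSimilarityIndex`, `SparseTermSimilarityMatrix` and `SoftCosineSimilarity` are importable from `gensim.similarities`, as in the soft-cosine tutorial):

```python
from gensim.corpora import Dictionary
from gensim.similarities import (
    LevenshteinSimilarityIndex,
    SparseTermSimilarityMatrix,
    SoftCosineSimilarity,
)

# Toy corpus, purely for illustration
documents = [
    ["graph", "minors", "survey"],
    ["graph", "trees"],
    ["human", "computer", "interaction"],
]
dictionary = Dictionary(documents)
bow_corpus = [dictionary.doc2bow(doc) for doc in documents]

# Term-term similarities derived from the Levenshtein distance between dictionary terms
termsim_index = LevenshteinSimilarityIndex(dictionary)
similarity_matrix = SparseTermSimilarityMatrix(termsim_index, dictionary)

# Soft cosine similarity of a query against each document in the corpus
docsim_index = SoftCosineSimilarity(bow_corpus, similarity_matrix)
query = dictionary.doc2bow(["graph", "tree"])
print(docsim_index[query])
```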
- Add `python3.7` support (@menshikh-iv, #2211)
  - Wheels for Windows, OSX and Linux platforms (@menshikh-iv, MacPython/gensim-wheels/#12)
  - Faster installation
👍 Improvements

Optimizations
- Reduce `Phraser` memory usage (drop frequencies) (@jenishah, #2208)
- Reduce memory consumption of summarizer (@horpto, #2298)
- Replace inline slow equivalent of mean_absolute_difference with fast (@horpto, #2284)
- Reuse precalculated updated prior in `ldamodel.update_dir_prior` (@horpto, #2274)
- Improve `KeyedVector.wmdistance` (@horpto, #2326)
- Optimize `remove_unreachable_nodes` in `gensim.summarization` (@horpto, #2263)
- Optimize `mz_entropy` from `gensim.summarization` (@horpto, #2267)
- Improve `filter_extremes` methods in `Dictionary` and `HashDictionary` (@horpto, #2303)
Additions
- Add `KeyedVectors.relative_cosine_similarity` (@rsdel2007, #2307); see the sketch after this list
- Add `random_seed` to `LdaMallet` (@Zohaggie & @menshikh-iv, #2153)
- Add `common_terms` parameter to `sklearn_api.PhrasesTransformer` (@pmlk, #2074)
- Add method for patch `corpora.Dictionary` based on special tokens (@Froskekongen, #2200)
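As a quick illustration of the first addition (our own example, not from the release notes; it assumes the `relative_cosine_similarity(w1, w2, topn)` signature and a small pre-trained model from `gensim.downloader`):

```python
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")  # small pre-trained KeyedVectors

# Cosine similarity of the two words, normalised by the summed similarity
# of the topn nearest neighbours of the first word (assumed definition).
print(wv.relative_cosine_similarity("cup", "mug", topn=10))
```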
Cleanup
- Improve `six` usage (`xrange`, `map`, `zip`) (@horpto, #2264)
- Refactor `line2doc` methods of `LowCorpus` and `MalletCorpus` (@horpto, #2269)
- Get rid of most warnings in testing (@menshikh-iv, #2191)
- Fix non-deterministic test failures (pin `PYTHONHASHSEED`) (@menshikh-iv, #2196)
- Fix "aliasing chunkize to chunkize_serial" warning on Windows (@aquatiko, #2202)
- Remove `__getitem__` code duplication in `gensim.models.phrases` (@jenishah, #2206)
- Add `flake8-rst` for docstring code examples (@kataev, #2192)
- Get rid of `py26` stuff (@menshikh-iv, #2214)
- Use `itertools.chain` instead of `sum` to concatenate lists (@Stigjb, #2212)
- Fix flake8 warnings W605, W504 (@horpto, #2256)
- Remove unnecessary list creations (@horpto, #2261)
- Fix extra list creation in `utils.get_max_id` (@horpto, #2254)
- Fix deprecation warning `np.sum(generator)` (@rsdel2007, #2296)
- Refactor `BM25` (@horpto, #2275)
- Fix pyemd import (@ramprakash-94, #2240)
- Set `metadata=True` for `make_wikicorpus` script by default (@Xinyi2016, #2245)
- Remove unimportant warning from `Phrases` (@rsdel2007, #2331)
- Replace `open()` by `smart_open()` in `gensim.models.fasttext._load_fasttext_format` (@rsdel2007, #2335)
🔴 Bug fixes
- Fix overflow error for `*Vec` corpusfile-based training (@bm371613, #2239)
- Fix `malletmodel2ldamodel` conversion (@horpto, #2288)
- Replace custom epsilons with numpy equivalent in `LdaModel` (@horpto, #2308)
- Add missing content to tarball (@menshikh-iv, #2194)
- Fix division by zero when w_star_count==0 (@allenyllee, #2259)
- Fix check for callbacks (@allenyllee, #2251)
- Fix `SvmLightCorpus.serialize` if `labels` is an instance of numpy.ndarray (@aquatiko, #2243)
- Fix poincare viz incompatibility with `plotly>=3.0.0` (@jenishah, #2226)
- Fix `keep_n` behavior for `Dictionary.filter_extremes` (@johann-petrak, #2232)
- Fix for `sphinx==1.8.1` (last r (@menshikh-iv, #None)
- Fix `np.issubdtype` warnings (@marioyc, #2210)
- Drop wrong key `-c` from `gensim.downloader` description (@horpto, #2262)
- Fix gensim build (docs & pyemd issues) (@menshikh-iv, #2318)
- Limit visdom version (avoid py2 issue from the latest visdom release) (@menshikh-iv, #2334)
- Fix visdom integration (using `viz.line()` instead of `viz.updatetrace()`) (@allenyllee, #2252)
📚 Tutorial and doc improvements
- Add gensim-data repo to `gensim.downloader` & fix rendering of code examples (@menshikh-iv, #2327)
- Fix typos in `gensim.models` (@rsdel2007, #2323)
- Fixed typos in notebooks (@rsdel2007, #2322)
- Update `Doc2Vec` documentation: how tags are assigned in `corpus_file` mode (@persiyanov, #2320)
- Fix typos in `gensim/models/keyedvectors.py` (@rsdel2007, #2290)
- Add documentation about ranges to scoring functions for `Phrases` (@jenishah, #2242)
- Update return sections for `KeyedVectors.evaluate_word_*` (@Stigjb, #2205)
- Fix return type in `KeyedVector.evaluate_word_analogies` (@Stigjb, #2207)
- Fix `WmdSimilarity` documentation (@jagmoreira, #2217)
- Replace `fify -> fifty` in `gensim.parsing.preprocessing.STOPWORDS` (@coderwassananmol, #2220)
- Remove `alpha="auto"` from `LdaMulticore` (not supported yet) (@johann-petrak, #2225)
- Update Adopters in README (@piskvorky, #2234)
- Fix broken link in `tutorials.md` (@rsdel2007, #2302)
⚠️ Deprecations (will be removed in the next major release)

- Remove
  - `gensim.models.wrappers.fasttext` (obsoleted by the new native `gensim.models.fasttext` implementation)
  - `gensim.examples`
  - `gensim.nosy`
  - `gensim.scripts.word2vec_standalone`
  - `gensim.scripts.make_wiki_lemma`
  - `gensim.scripts.make_wiki_online`
  - `gensim.scripts.make_wiki_online_lemma`
  - `gensim.scripts.make_wiki_online_nodebug`
  - `gensim.scripts.make_wiki` (all of these obsoleted by the new native `gensim.scripts.segment_wiki` implementation)
  - "deprecated" functions and attributes
- Move
  - `gensim.scripts.make_wikicorpus` ➡ `gensim.scripts.make_wiki.py`
  - `gensim.summarization` ➡ `gensim.models.summarization`
  - `gensim.topic_coherence` ➡ `gensim.models._coherence`
  - `gensim.utils` ➡ `gensim.utils.utils` (old imports will continue to work)
  - `gensim.parsing.*` ➡ `gensim.utils.text_utils`
3.6.0, 2018-09-20
🌟 New features
- File-based training for `*2Vec` models (@persiyanov, #2127 & #2078 & #2048)

New training mode for `*2Vec` models (word2vec, doc2vec, fasttext) that allows model training to scale linearly with the number of cores (full GIL elimination). The result of our Google Summer of Code 2018 project by Dmitry Persiyanov.

Benchmark on the full English Wikipedia, Intel(R) Xeon(R) CPU @ 2.30GHz, 32 cores (GCE cloud), MKL BLAS:

| Model | Queue-based version [sec] | File-based version [sec] | speed up | Accuracy (queue-based) | Accuracy (file-based) |
|---|---|---|---|---|---|
| Word2Vec | 9230 | 2437 | 3.79x | 0.754 (± 0.003) | 0.750 (± 0.001) |
| Doc2Vec | 18264 | 2889 | 6.32x | 0.721 (± 0.002) | 0.683 (± 0.003) |
| FastText | 16361 | 10625 | 1.54x | 0.642 (± 0.002) | 0.660 (± 0.001) |

Usage:

```python
import gensim.downloader as api
from multiprocessing import cpu_count
from gensim.utils import save_as_line_sentence
from gensim.test.utils import get_tmpfile
from gensim.models import Word2Vec, Doc2Vec, FastText

# Convert any corpus to the needed format: 1 document per line, words delimited by " "
corpus = api.load("text8")
corpus_fname = get_tmpfile("text8-file-sentence.txt")
save_as_line_sentence(corpus, corpus_fname)

# Choose the number of cores you want to use (let's use all, the models scale linearly now!)
num_cores = cpu_count()

# Train models using all cores
w2v_model = Word2Vec(corpus_file=corpus_fname, workers=num_cores)
d2v_model = Doc2Vec(corpus_file=corpus_fname, workers=num_cores)
ft_model = FastText(corpus_file=corpus_fname, workers=num_cores)
```
👍 Improvements
- Add scikit-learn wrapper for `FastText` (@mcemilg, #2178)
- Add multiprocessing support for `BM25` (@Shiki-H, #2146)
- Add `name_only` option for downloader api (@aneesh-joshi, #2143)
- Make `word2vec2tensor` script compatible with `python3` (@vsocrates, #2147)
- Add custom filter for `Wikicorpus` (@mattilyra, #2089)
- Make `similarity_matrix` support non-contiguous dictionaries (@Witiko, #2047)
🔴 Bug fixes
- Fix memory consumption in `AuthorTopicModel` (@philipphager, #2122)
- Correctly process empty documents in `AuthorTopicModel` (@probinso, #2133)
- Fix ZeroDivisionError `keywords` issue with short input (@LShostenko, #2154)
- Fix `min_count` handling in phrases detection using `npmi_scorer` (@lopusz, #2072)
- Remove duplicate count from `Phraser` log message (@robguinness, #2151)
- Replace `np.integer` -> `np.int` in `AuthorTopicModel` (@menshikh-iv, #2145)
📚 Tutorial and doc improvements
- Update docstring with new analogy evaluation method (@akutuzov, #2130)
- Improve `prune_at` parameter description for `gensim.corpora.Dictionary` (@yxonic, #2128)
- Fix `default` -> `auto` prior parameter in documentation for lda-related models (@Laubeee, #2156)
- Use heading instead of bold style in `gensim.models.translation_matrix` (@nzw0301, #2164)
- Fix quote of vocabulary from `gensim.models.Word2Vec` (@nzw0301, #2161)
- Replace deprecated parameters with new in docstring of `gensim.models.Doc2Vec` (@xuhdev, #2165)
- Fix formula in Mallet documentation (@Laubeee, #2186)
- Fix minor semantic issue in docs for `Phrases` (@RunHorst, #2148)
- Fix typo in documentation (@KenjiOhtsuka, #2157)
- Additional documentation fixes (@piskvorky, #2121)
⚠️ Deprecations (will be removed in the next major release)

- Remove
  - `gensim.models.wrappers.fasttext` (obsoleted by the new native `gensim.models.fasttext` implementation)
  - `gensim.examples`
  - `gensim.nosy`
  - `gensim.scripts.word2vec_standalone`
  - `gensim.scripts.make_wiki_lemma`
  - `gensim.scripts.make_wiki_online`
  - `gensim.scripts.make_wiki_online_lemma`
  - `gensim.scripts.make_wiki_online_nodebug`
  - `gensim.scripts.make_wiki` (all of these obsoleted by the new native `gensim.scripts.segment_wiki` implementation)
  - "deprecated" functions and attributes
- Move
  - `gensim.scripts.make_wikicorpus` ➡ `gensim.scripts.make_wiki.py`
  - `gensim.summarization` ➡ `gensim.models.summarization`
  - `gensim.topic_coherence` ➡ `gensim.models._coherence`
  - `gensim.utils` ➡ `gensim.utils.utils` (old imports will continue to work)
  - `gensim.parsing.*` ➡ `gensim.utils.text_utils`
4.0.0beta, 2020-10-31
Main highlights
Massively optimized popular algorithms the community has grown to love: fastText, word2vec, doc2vec, phrases:
a. Efficiency (wall time / peak RAM / throughput benchmarks)
In other words, fastText now needs 3x less RAM (and is faster); word2vec has 2x faster init (and needs less RAM, and is faster); detecting collocation phrases is 2x faster. 4.0 benchmarks.
b. Robustness. We fixed a bunch of long-standing bugs by refactoring the internal code structure (see 🔴 Bug fixes below)
c. Simplified OOP model for easier model exports and integration with TensorFlow, PyTorch &co.
These improvements come to you transparently aka "for free", but see Migration guide for some changes that break the old Gensim 3.x API. Update your code accordingly.
Dropped a bunch of externally contributed modules: summarization, pivoted TFIDF normalization, FIXME.
Their code quality was not up to our standards, and there was no one to maintain them, answer user questions, or support these modules.
So rather than let them rot, we took the hard decision of removing these contributed modules from Gensim. If anyone's interested in maintaining them, please fork them into your own repo; they can live happily outside of Gensim.
Dropped Python 2. Gensim 4.0 is Py3.6+. Read our Python version support policy.
A new Gensim website – finally! 🙃
So, a major clean-up release overall. We're happy with this tighter, leaner and faster Gensim.
This is the direction we'll keep going forward: less kitchen-sink of "latest academic fad", more focus on robust engineering, targeting common NLP & document similarity use-cases.
Why a pre-release?
This 4.0.0beta pre-release is for users who want the cutting edge performance and bug fixes. Plus users who want to help out, by testing and providing feedback: code, documentation, workflows… Please let us know on the mailing list!
Install the pre-release with:
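`pip install --pre --upgrade gensim` (our assumption: the standard pip pre-release flag, with the beta published to PyPI).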
What will change between this pre-release and a "full" 4.0 release?
Check progress here.
- `max_final_vocab` parameter in fastText constructor, by @mpenkov
- `alpha` parameter in LDA model, by @xh2
- `save_facebook_model` failure after update-vocab & other initialization streamlining, by @gojomo
- `xml.etree.cElementTree`, by @hugovk
- `similarities.index` to the more appropriate `similarities.annoy`, by @piskvorky
- `num_words` to `topn` in dtm_coherence, by @MeganStodel