This article explains how to handle the Python TfidfVectorizer error "empty vocabulary; perhaps the documents only contain stop words". The recommended answer below should be a useful reference for anyone hitting the same problem.

Problem Description

I'm trying to use Python's TfidfVectorizer to transform a corpus of text. However, when I try to fit_transform it, I get a ValueError: empty vocabulary; perhaps the documents only contain stop words.

In [69]: TfidfVectorizer().fit_transform(smallcorp)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-69-ac16344f3129> in <module>()
----> 1 TfidfVectorizer().fit_transform(smallcorp)

/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in fit_transform(self, raw_documents, y)
   1217         vectors : array, [n_samples, n_features]
   1218         """
-> 1219         X = super(TfidfVectorizer, self).fit_transform(raw_documents)
   1220         self._tfidf.fit(X)
   1221         # X is already a transformed view of raw_documents so

/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in fit_transform(self, raw_documents, y)
    778         max_features = self.max_features
    779
--> 780         vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)
    781         X = X.tocsc()
    782

/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in _count_vocab(self, raw_documents, fixed_vocab)
    725             vocabulary = dict(vocabulary)
    726             if not vocabulary:
--> 727                 raise ValueError("empty vocabulary; perhaps the documents only"
    728                                  " contain stop words")
    729

ValueError: empty vocabulary; perhaps the documents only contain stop words

I read through the SO question here: Problems using a custom vocabulary for TfidfVectorizer scikit-learn, and tried ogrisel's suggestion of using TfidfVectorizer(**params).build_analyzer()(dataset2) to check the results of the text-analysis step, and that seems to be working as expected; snippet below:

In [68]: TfidfVectorizer().build_analyzer()(smallcorp)
Out[68]:
[u'due',
 u'to',
 u'lack',
 u'of',
 u'personal',
 u'biggest',
 u'education',
 u'and',
 u'husband',
 u'to',

Is there something else that I am doing wrong? The corpus I am feeding it is just one giant long string punctuated by newlines.

Thanks!
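For context, the error comes from what the vectorizer iterates over: scikit-learn loops over whatever is passed to fit_transform, so a bare string is seen as a sequence of one-character "documents", and the default token_pattern (which only keeps tokens of two or more word characters) discards all of them, leaving the vocabulary empty. A minimal sketch of that behaviour, using a made-up one-line corpus as a stand-in:

from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical one-line corpus standing in for smallcorp.
corpus = "due to lack of personal biggest education"

# Iterating a bare string yields single characters, so each "document"
# the vectorizer would see is one character long.
print(list(corpus)[:5])            # ['d', 'u', 'e', ' ', 't']

# The default analyzer keeps only tokens of 2+ word characters, so a
# one-character document contributes nothing to the vocabulary.
analyze = TfidfVectorizer().build_analyzer()
print(analyze("d"))                # []
print(analyze(corpus))             # ['due', 'to', 'lack', 'of', ...]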

Recommended Answer

I guess it's because you just have one string. Try splitting it into a list of strings, e.g.:

In [51]: smallcorp
Out[51]: 'Ah! Now I have done Philosophy,\nI have finished Law and Medicine,\nAnd sadly even Theology:\nTaken fierce pains, from end to end.\nNow here I am, a fool for sure!\nNo wiser than I was before:'

In [52]: tf = TfidfVectorizer()

In [53]: tf.fit_transform(smallcorp.split('\n'))
Out[53]:
<6x28 sparse matrix of type '<type 'numpy.float64'>'
    with 31 stored elements in Compressed Sparse Row format>
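
For completeness, a short sketch of the same fix written out end to end: split the string on newlines and drop blank lines before fitting. The smallcorp literal below is just a stand-in mirroring the asker's setup:

from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-in corpus: one long string punctuated by newlines.
smallcorp = (
    "Ah! Now I have done Philosophy,\n"
    "I have finished Law and Medicine,\n"
    "And sadly even Theology:\n"
    "Taken fierce pains, from end to end.\n"
    "Now here I am, a fool for sure!\n"
    "No wiser than I was before:"
)

# One document per line; filter out empty lines so they don't end up as
# documents with no tokens.
docs = [line for line in smallcorp.splitlines() if line.strip()]

tf = TfidfVectorizer()
X = tf.fit_transform(docs)

print(X.shape)                      # (6, number_of_terms)
print(sorted(tf.vocabulary_)[:5])   # a few of the learned terms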

That concludes this article on the Python TfidfVectorizer error "empty vocabulary; perhaps the documents only contain stop words"; hopefully the recommended answer is helpful.
