Problem Description
I used sklearn to calculate TF-IDF (term frequency–inverse document frequency) values for documents with the following commands:
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(documents)
from sklearn.feature_extraction.text import TfidfTransformer
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
X_train_tf is a scipy.sparse matrix of shape (2257, 35788).
How can I get the TF-IDF values for the words in a particular document? More specifically, how can I get the words with the maximum TF-IDF values in a given document?
Recommended Answer
You can use TfidfVectorizer from sklearn:
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
from scipy.sparse import csr_matrix  # only needed if you want to save tfidf_matrix

# input='filename' means corpus must be a list of file paths to read
tf = TfidfVectorizer(input='filename', analyzer='word', ngram_range=(1, 6),
                     min_df=0, stop_words='english', sublinear_tf=True)
tfidf_matrix = tf.fit_transform(corpus)
The above tfidf_matrix has the TF-IDF values of all the documents in the corpus. This is a big sparse matrix. Now,
feature_names = tf.get_feature_names_out()  # tf.get_feature_names() in scikit-learn < 1.0
This gives you the list of all the tokens, n-grams, or words. For the first document in your corpus:
doc = 0
# column indices of the terms that actually occur in this document
feature_index = tfidf_matrix[doc, :].nonzero()[1]
# pairs of (column index, TF-IDF score) for those terms
tfidf_scores = zip(feature_index, [tfidf_matrix[doc, x] for x in feature_index])
Let's print them:
for w, s in [(feature_names[i], s) for (i, s) in tfidf_scores]:
    print(w, s)