The concept of TF-IDF

TF-IDF is the abbreviation of Term Frequency - Inverse Document Frequency, i.e., "term frequency - inverse document frequency". It consists of two parts: TF and IDF.

I used TF in previous articles as the strategy for high-frequency word extraction. TF stands for term frequency, that is, the total number of times a word appears in an article:

TF = the total number of times a word appears in the article

But since each article has a different length, we can normalize the above:

TF = (the number of times a word appears in the article) / (the total number of words in the article)

IDF stands for inverse document frequency, which reflects how common a word is across all documents. When a word appears in many documents, its IDF value is low; when a word appears in very few documents, its IDF value is high. As a formula:

IDF = log( total number of documents in the corpus / (number of documents containing the word + 1) )

The +1 here avoids a zero denominator when no document contains the word.

Now that we know what TF and IDF represent, we can also obtain TF-IDF:

TF-IDF = TF * IDF

From these properties we can conclude that TF-IDF is proportional to the number of times a word appears in a document, and inversely proportional to the number of documents in the corpus that contain the word.
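As a quick sanity check of these formulas, here is a minimal sketch on a made-up four-document corpus (the words and numbers are for illustration only):

import math

docs = [
    ["sunny", "weather", "today"],
    ["rainy", "weather", "tonight"],
    ["what", "a", "day"],
    ["a", "beautiful", "sunny", "day"],
]

word, doc = "sunny", docs[0]

# TF = occurrences of the word in the document / total words in the document
tf = doc.count(word) / len(doc)             # 1 / 3

# IDF = log(total documents / (documents containing the word + 1))
df = sum(1 for d in docs if word in d)      # "sunny" appears in 2 documents
idf = math.log(len(docs) / (df + 1))        # log(4 / 3)

print(tf * idf)                             # TF-IDF = TF * IDF, about 0.096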

Implementation of TF-IDF

Now that we understand TF-IDF, let's implement the algorithm in a few different ways.

1. Using gensim to calculate TF-IDF

First, we build a corpus and tokenize it:
# Build the corpus
corpus = [
    "what is the weather like today",
    "what is for dinner tonight",
    "this is a question worth pondering",
    "it is a beautiful day today"
]

# Tokenize each sentence
words = []
for i in corpus:
    words.append(i.split(" "))
print(words)
The results are as follows:

[['what', 'is', 'the', 'weather', 'like', 'today'], ['what', 'is', 'for', 'dinner', 'tonight'], ['this', 'is', 'a', 'question', 'worth', 'pondering'], ['it', 'is', 'a', 'beautiful', 'day', 'today']]

Next, let's count the number of times each word appears in each document:
from gensim import corpora, models

# Give each word an ID and count the number of times each word appears in each document
dic = corpora.Dictionary(words)
new_corpus = [dic.doc2bow(text) for text in words]
print(new_corpus)
print(dic.token2id)
The results are as follows:

The doc2bow function converts a document into the bag-of-words (BoW) representation that dic uses internally. In each tuple, the first number is the word's ID and the second number is its number of occurrences in the current document. (The example may be poorly chosen, since every word here appears exactly once.)
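For instance, a document with a repeated word makes the counts visible (a small sketch reusing the dic built above; the exact IDs depend on the dictionary):

# A made-up document where "today" appears twice
print(dic.doc2bow("today today weather".split()))
# e.g. [(id_of_today, 2), (id_of_weather, 1)]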

token2id outputs a dictionary mapping each word to its ID, in the format {word: word id}.

If you use id2token instead, the format is {word id: word}; either form works here.
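For example, the reversed mapping can be built directly from token2id (a one-line sketch):

# Build an {id: word} mapping by inverting {word: id}
id2token = {v: k for k, v in dic.token2id.items()}
print(id2token)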

Next we train the gensim model, save it, and test it:
# Train the model and save it
tfidf = models.TfidfModel(new_corpus)
tfidf.save("my_model.tfidf")

# Load the model
tfidf = models.TfidfModel.load("my_model.tfidf")

# Use the trained model to calculate TF-IDF values
string = "i like the weather today"
string_bow = dic.doc2bow(string.lower().split())
string_tfidf = tfidf[string_bow]
print(string_tfidf)
The results are as follows:

From the result we can see that in each output tuple the left value is the word ID and the right value is the word's TF-IDF value. Words that were not seen when the model was trained do not appear in the results.
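You can check which query words were dropped by testing them against the trained dictionary (a small sketch reusing dic from above):

# Words absent from the training dictionary get no ID and therefore no TF-IDF value
for w in "i like the weather today".split():
    print(w, "->", "in vocabulary" if w in dic.token2id else "dropped")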

2. Using sklearn to calculate TF-IDF

sklearn is much more convenient to use than gensim; we mainly use sklearn's TfidfVectorizer:
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "what is the weather like today",
    "what is for dinner tonight",
    "this is a question worth pondering",
    "it is a beautiful day today"
]

tfidf_vec = TfidfVectorizer()
# Use fit_transform to obtain the TF-IDF matrix
tfidf_matrix = tfidf_vec.fit_transform(corpus)
# Use get_feature_names to get the unique words
# (removed in scikit-learn 1.2; use get_feature_names_out there)
print(tfidf_vec.get_feature_names())
# Get the ID corresponding to each word
print(tfidf_vec.vocabulary_)
# Output the TF-IDF matrix
print(tfidf_matrix)
Some reference results are as follows:
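To read the sparse output more easily, the matrix can be densified and paired with the vocabulary (a minimal sketch; get_feature_names_out is the scikit-learn >= 1.0 replacement for get_feature_names):

# Pair each word with its TF-IDF score, one dictionary per document
dense = tfidf_matrix.toarray()              # shape: (n_documents, n_words)
vocab = tfidf_vec.get_feature_names_out()
for row in dense:
    print({word: round(score, 3) for word, score in zip(vocab, row) if score > 0})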

3. Implementing the TF-IDF algorithm manually in Python

Above we used two library functions to calculate the TF-IDF value of each word in a custom corpus. Now let's implement TF-IDF by hand:
import math

corpus = [
    "what is the weather like today",
    "what is for dinner tonight",
    "this is a question worth pondering",
    "it is a beautiful day today"
]

# Tokenize the corpus
words = []
for i in corpus:
    words.append(i.split())

# With a custom stop-word list, we could tokenize and drop stop words like this:
# f = ["is", "the"]
# for i in corpus:
#     all_words = i.split()
#     new_words = []
#     for j in all_words:
#         if j not in f:
#             new_words.append(j)
#     words.append(new_words)
# print(words)

# Word frequency statistics
def Counter(word_list):
    wordcount = []
    for i in word_list:
        count = {}
        for j in i:
            if not count.get(j):
                count.update({j: 1})
            elif count.get(j):
                count[j] += 1
        wordcount.append(count)
    return wordcount

wordcount = Counter(words)

# Calculate TF (word is the word being scored; word_list is the word-count
# dictionary of the document that contains it)
def tf(word, word_list):
    return word_list.get(word) / sum(word_list.values())

# Count the number of sentences that contain the word
def count_sentence(word, wordcount):
    return sum(1 for i in wordcount if i.get(word))

# Calculate IDF
def idf(word, wordcount):
    return math.log(len(wordcount) / (count_sentence(word, wordcount) + 1))

# Calculate TF-IDF
def tfidf(word, word_list, wordcount):
    return tf(word, word_list) * idf(word, wordcount)

p = 1
for i in wordcount:
    print("part:{}".format(p))
    p = p + 1
    for j, k in i.items():
        print("word: {} ---- TF-IDF:{}".format(j, tfidf(j, i, wordcount)))
Some of the results after running are as follows:
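One detail worth noticing in these results: with the +1 smoothing in the denominator, a word that appears in every document gets a negative IDF, and hence a negative TF-IDF. A quick check using the functions defined above:

# "is" appears in all four documents, so its smoothed IDF is negative
print(idf("is", wordcount))   # log(4 / (4 + 1)), roughly -0.223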

Summary

TF-IDF is mainly used for extracting keywords from articles. It can also be used to find similar articles, summarize articles, and perform feature selection (extracting important features).

The advantages of the TF-IDF algorithm are that it is simple and fast, and its results generally match reality. Its disadvantage is that measuring a word's importance by "word frequency" alone is not comprehensive enough; sometimes an important word may not appear many times. Moreover, the algorithm cannot reflect the position of words: words near the beginning and words near the end are treated as equally important, which is not realistic. (One solution is to give more weight to the first paragraph and to the first sentence of each paragraph.)
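As a rough illustration of that remedy, one could give extra weight to occurrences in the first sentence before computing TF (a hypothetical sketch; the 2x boost factor is an arbitrary choice, not part of the standard algorithm):

# Position-weighted TF: occurrences in the first sentence count double
def weighted_tf(word, sentences):
    total, score = 0, 0.0
    for n, sentence in enumerate(sentences):
        for w in sentence.split():
            total += 1
            if w == word:
                score += 2.0 if n == 0 else 1.0   # boost the first sentence
    return score / total

print(weighted_tf("weather", ["what is the weather like today", "it is cold"]))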