How can I find the most frequently used words per observation in R?


I am very new to NLP, so please don't judge me too harshly.

I have a very large data frame of customer feedback, and my goal is to analyze it. I have already tokenized the feedback into words and removed stopwords (the SMART list). Now I need to produce a table of the most and least frequently used words.

The code looks like this:

library(tokenizers)
library(stopwords)
words_as_tokens <- 
     tokenize_words(dat$description, 
                    stopwords = stopwords(language = "en", source = "smart"))
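
This gives me a list with one character vector of tokens per feedback, for example (the token count shown is illustrative):

# words_as_tokens is a list: one lowercased, stopword-free token vector per feedback
str(words_as_tokens[[1]])
# chr [1:...] "unprecedented" "perfect" "word" "describe" "amazon" ...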

The data frame looks like this: there are many feedback texts (the variable "description") and, for each one, the customer who gave it (customers are not unique and can repeat). I would like to get a table with three columns: a) customer name, b) word, c) its frequency, with the ranking sorted in descending order of frequency.
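
Something like this (the values here are made up, just to show the shape):

name   word         freq
John   experience   2
John   amazon       1
Alex   nice         2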

r nlp text-mining
2 Answers
1 vote

Try this:

library(tokenizers)
library(stopwords)
library(tidyverse)

# tokenize each description, then build a per-customer data frame of
# word frequencies, sorted in decreasing order
words_as_tokens <- setNames(
  lapply(
    sapply(dat$description,
           tokenize_words,
           stopwords = stopwords(language = "en", source = "smart")),
    function(x) as.data.frame(sort(table(x), decreasing = TRUE),
                              stringsAsFactors = FALSE)),
  dat$name)

# bind the per-customer data frames into one, keeping the customer
# name as an id column
df <- words_as_tokens %>%
  bind_rows(.id = "name") %>%
  rename(word = x)

# output
df

#    name          word Freq
# 1  John    experience    2
# 2  John          word    2
# 3  John    absolutely    1
# 4  John        action    1
# 5  John        amazon    1
# 6  John     amazon.ae    1
# 7  John     answering    1
# ....
# 42 Alex         break    2
# 43 Alex          nice    2
# 44 Alex         times    2
# 45 Alex             8    1
# 46 Alex        accent    1
# 47 Alex        africa    1
# 48 Alex        agents    1
# ....

Data

dat <- data.frame(name = c("John", "Alex"),
                  description = c("Unprecedented. The perfect word to describe Amazon. In every positive sense of that word! All because of one man - Jeff Bezos. What an entrepreneur! What a vision! This is from personal experience. Let me explain. I had given up all hope, after a horrible experience with Amazon.ae (formerly Souq.com) - due to a Herculean effort to get an order cancelled and the subsequent refund issued. I have never faced such a feedback-resistant team in my life! They were robotically answering my calls and sending me monotonous, unhelpful emails, followed by absolutely zero action!",
                                 "Not only does Amazon have great products but their Customer Service for the most part is wonderful. Although most times you are outsourced to a different country, I personally have found that when I call it's either South Africa or Philippines and they speak so well, understand me and my NY accent and are quite nice. Let’s face it. Most times you are calling CS with a problem or issue. These agents have to listen to 8 hours of complaints so they themselves need a break. No matter how annoyed I am I try to be on my best behavior and as nice as can be because they too need a break with how nasty we as a society can be."), stringsAsFactors = F)


0 votes

You could also try quanteda, with the following approach:

library(quanteda)

# define a corpus object to store the initial documents
mycorpus <- corpus(dat$description)

# convert the corpus to a document-feature matrix
mydfm <- dfm(mycorpus,
             tolower = TRUE,
             remove = stopwords(),   # removes English stopwords
             remove_punct = TRUE,    # removes punctuation
             remove_numbers = TRUE,  # removes digits
             remove_symbols = TRUE,  # removes symbols
             remove_url = TRUE)      # removes URLs

# calculate word frequencies and return a data.frame
word_frequencies <- textstat_frequency(mydfm)
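
The table above is corpus-wide. To get the per-customer ranking the question asks for, one option is to attach the customer names as a document variable and pass them to the groups argument (a sketch, assuming the documents are still in the same order as dat; note that in quanteda >= 3, textstat_frequency() lives in the quanteda.textstats package):

# attach the customer names as a document variable, then rank within each group
docvars(mydfm, "name") <- dat$name
word_frequencies_by_name <- textstat_frequency(mydfm, groups = docvars(mydfm, "name"))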