I am trying to count the most frequently occurring bigrams across 1500 IDs (one ID per row, one event each), without counting a bigram more than once within a single ID (row). For example, given the IDs below, I want "work day" counted only once per ID, so its total in my summary should be 2. Once "work day" has been counted for an ID, I don't want it counted again for that ID.
ID Text
1 "The work day was horrible. On this particular work day, I made 5 mistakes....."
2 "This long work day was the best for me. I miss a long work day, because I get into a rhythm....."
Here is my code. It produces a histogram of the total counts of the 40 most frequent bigrams, showing each two-word bigram and its count. I'm not sure whether it counts a bigram more than once per ID as described above; I believe it simply takes all the "events" and counts how many times each two-word bigram occurs, regardless of ID.
Sum1 %>%
unnest_tokens(word, "Event", token = "ngrams", n = 2) %>%
separate(word, c("word1", "word2"), sep = " ") %>%
filter(!word1 %in% stop_words$word) %>%
filter(!word2 %in% stop_words$word) %>%
unite(word, word1, word2, sep = " ") %>%
count(word, sort = TRUE) %>%
slice(1:40) %>%
ggplot() + geom_bar(aes(x=reorder(word,n), y=n), stat = "identity", fill = "#de5833") +
theme_minimal() +
coord_flip()
Something like this?
library(tidytext)
library(dplyr)
d <- data.frame(ID = 1:2,
txt = c('a particular word',
'a particular word a phrase and a particular word')
)
## > d
ID txt
1 1 a particular word
2 2 a particular word a phrase and a particular word
This uses base R's `strsplit` and `Filter` to strip stopwords from the raw text, and finally `distinct` to keep only the unique bigrams per ID:
d |>
rowwise() |>
mutate(txt = strsplit(txt, split = '\\s')[[1]] |>
Filter(f = \(x) !(x %in% get_stopwords()$word)) |>
paste(collapse = ' ')
) |>
unnest_tokens(input = txt, output = 'tokens',
token = 'ngrams', n = 2) |>
distinct(ID, tokens)
(`strsplit` returns a list, whose single item (the vector of words) must be extracted with `[[1]]` before `Filter` can be applied.)
Output:
# A tibble: 4 x 2
# Rowwise:
ID tokens
<int> <chr>
1 1 particular word
2 2 particular word
3 2 word phrase
4 2 phrase particular
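To get the OP's summary counts from there (each bigram counted at most once per ID), you can ungroup the deduplicated result and tally it with `count()`. A sketch continuing from the same example data and pipeline as above:

```r
library(tidytext)
library(dplyr)

d <- data.frame(ID = 1:2,
                txt = c('a particular word',
                        'a particular word a phrase and a particular word'))

counts <- d |>
  rowwise() |>
  mutate(txt = strsplit(txt, split = '\\s')[[1]] |>
           Filter(f = \(x) !(x %in% get_stopwords()$word)) |>
           paste(collapse = ' ')) |>
  unnest_tokens(input = txt, output = 'tokens',
                token = 'ngrams', n = 2) |>
  distinct(ID, tokens) |>   # at most one hit per bigram per ID
  ungroup() |>              # drop the rowwise grouping before counting
  count(tokens, sort = TRUE)

counts
# "particular word" should now count as 2 (once per ID),
# "word phrase" and "phrase particular" as 1 each
```

These per-ID-deduplicated counts can then feed the original `slice(1:40) %>% ggplot(...)` histogram unchanged.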