Problem training a Naive Bayes model in R


I am using the caret package (I don't have much experience with caret) to train my data with Naive Bayes, as described in the R code below. I run into a problem when executing nb_model with the sentence column included, because it generates a series of error messages:

1: predictions failed for Fold1: usekernel= TRUE, fL=0, adjust=1 Error in 
predict.NaiveBayes(modelFit, newdata) : 
Not all variable names used in object found in newdata

2: model fit failed for Fold1: usekernel=FALSE, fL=0, adjust=1 Error in 
NaiveBayes.default(x, y, usekernel = FALSE, fL = param$fL, ...) : 

Could you please suggest how to modify the R code below to resolve this problem?

Dataset used in the R code below

A quick example of what the dataset looks like (10 variables):

  Over arrested at in | Negative | Negative | Neutral | Neutral | Neutral | Negative | Positive | Neutral | Negative
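
For reference, a hypothetical stand-in with this shape can be built directly in R (the first row mirrors the sample above; the second row is invented for illustration):

# Hypothetical stand-in for textsent.csv: V1 holds a raw sentence,
# V2-V10 hold sentiment labels (second row is made up)
TrainSet = data.frame(
  V1  = c("Over arrested at in", "Another example sentence"),
  V2  = c("Negative", "Positive"),
  V3  = c("Negative", "Neutral"),
  V4  = c("Neutral",  "Negative"),
  V5  = c("Neutral",  "Positive"),
  V6  = c("Neutral",  "Neutral"),
  V7  = c("Negative", "Positive"),
  V8  = c("Positive", "Neutral"),
  V9  = c("Neutral",  "Positive"),
  V10 = c("Negative", "Positive"),
  stringsAsFactors = FALSE
)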
library(caret)

# Loading dataset
setwd("directory/path")
TrainSet = read.csv("textsent.csv", header = FALSE)

# Specifying an 80-20 train-test split
# Creating the training and testing sets
train = TrainSet[1:1200, ]
test = TrainSet[1201:1500, ]

# Declaring the trainControl function
train_ctrl = trainControl(
  method  = "cv", # Specifying cross-validation
  number  = 3     # Specifying 3-fold
)

nb_model = train(
  V10 ~ ., # Specifying the response variable and the feature variables
  method = "nb", # Specifying the model to use
  data = train,
  trControl = train_ctrl
)

# Get the predictions of your model in the test set
predictions = predict(nb_model, newdata = test)

# See the confusion matrix of your model in the test set
confusionMatrix(predictions, test$V10)
1 Answer

The dataset is all character data. In that data you have a combination of easily encoded words (V2-V10) and sentences, on which you can do any amount of feature engineering and generate any number of features.

To read up on text mining, check out the tm package, its documentation, or blogs like hack-r.com for practical examples. Here is some Github code from the linked article.
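
As a rough sketch of what that looks like with tm (standard tm calls; this assumes TrainSet has already been loaded with the raw sentences still in V1, as in the next step):

# Minimal tm sketch: turn the sentence column V1 into a document-term matrix
library(tm)
corpus <- VCorpus(VectorSource(TrainSet$V1))
corpus <- tm_map(corpus, content_transformer(tolower))       # lowercase
corpus <- tm_map(corpus, removePunctuation)                  # strip punctuation
corpus <- tm_map(corpus, removeWords, stopwords("english"))  # drop stopwords
dtm <- DocumentTermMatrix(corpus)  # one row per sentence, one column per term
inspect(dtm)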

OK, so since your V1 has lots of unique sentences, I first read the data in with stringsAsFactors = F:

TrainSet <- read.csv(url("https://raw.githubusercontent.com/jcool12/dataset/master/textsentiment.csv?token=AA4LAP5VXI6I7FRKMT6HDPK6U5XBY"),
                     header = F,
                     stringsAsFactors = F)
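
A quick sanity check that every column really came in as character rather than factor:

# All 10 columns should be reported as chr
str(TrainSet)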

library(caret)

Then I did the feature engineering:

## Feature Engineering
# V2 - V10
TrainSet[TrainSet=="Negative"] <- 0
TrainSet[TrainSet=="Positive"] <- 1

# V1 - not sure what you wanted to do with this
#     but here's a simple example of what 
#     you could do
TrainSet$V1 <- grepl("london", TrainSet$V1) # tests if london is in the string
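
One caveat worth flagging (my addition, not part of the original answer): the recode above leaves "Neutral" untouched, so V10 is now a character column with values 0, 1 and Neutral. caret generally coerces a character outcome to a factor for classification, but making that explicit is safer:

# Optional: make the outcome an explicit factor (levels: 0, 1, Neutral)
# so that train() unambiguously treats this as classification
TrainSet$V10 <- factor(TrainSet$V10)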

Then it worked, although you will want to refine the engineering of V1 (or drop it) to get better results.

# In reality you could probably generate 20+ decent features from this text
#  word count, tons of stuff... see the tm package
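
For example, a word-count feature is a one-liner; note that V1 has already been overwritten by the grepl() recode above, so the raw sentences would need to be saved first (hypothetical raw_sentences name):

# Hypothetical sketch: save the raw sentences *before* the grepl() recode,
# then count words by splitting on whitespace
raw_sentences <- TrainSet$V1   # run this line before overwriting V1
TrainSet$word_count <- lengths(strsplit(raw_sentences, "\\s+"))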

# Specifying an 80-20 train-test split
# Creating the training and testing sets
train = TrainSet[1:1200, ]
test = TrainSet[1201:1500, ]

# Declaring the trainControl function
train_ctrl = trainControl(
  method  = "cv", # Specifying cross-validation
  number  = 3     # Specifying 3-fold
)

nb_model = train(
  V10 ~ ., # Specifying the response variable and the feature variables
  method = "nb", # Specifying the model to use
  data = train,
  trControl = train_ctrl
)

# Resampling: Cross-Validated (3 fold)
# Summary of sample sizes: 799, 800, 801
# Resampling results across tuning parameters:
#
#   usekernel  Accuracy   Kappa
#   FALSE      0.6533444  0.4422346
#   TRUE       0.6633569  0.4185751
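
From here, evaluation on the held-out rows works exactly as in the question's last two lines (test was split off after the recode, so its labels match the training labels):

# Get predictions on the test set and compare them with the test labels
predictions = predict(nb_model, newdata = test)
confusionMatrix(predictions, factor(test$V10))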

In this basic example you get some ignorable warnings, simply because very few sentences in V1 contain the word "london". I would suggest using that column for sentiment analysis, term frequency / inverse document frequency, etc.
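
tm also supports the tf-idf suggestion directly, as a weighting option on the document-term matrix (continuing the corpus sketch from earlier):

# The same document-term matrix, weighted by tf-idf instead of raw counts
dtm_tfidf <- DocumentTermMatrix(corpus, control = list(weighting = weightTfIdf))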
