Setting hidden layers and neurons in neuralnet and caret (R)


I want to fit a neural network using the neuralnet and caret packages.

The data df can be obtained from this post.

When running the neuralnet() function, there is an argument called hidden where you can set the number of hidden layers and the neurons in each. Say I want 2 hidden layers with 3 and 2 neurons respectively; that would be written as hidden = c(3, 2).
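For reference, a minimal sketch of the plain neuralnet() call with that hidden specification (using synthetic data here, since the question's df comes from the linked post; DC1 is assumed to be a numeric response):

```r
library(neuralnet)

# Synthetic stand-in for the question's df: numeric response DC1 plus predictors.
set.seed(1)
df <- data.frame(DC1 = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))

# Two hidden layers with 3 and 2 neurons respectively.
fit <- neuralnet(DC1 ~ ., data = df, hidden = c(3, 2), linear.output = TRUE)
```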

However, since I want to cross-validate it, I decided to use the wonderful caret package. But when using train(), I don't know how to set the number of layers and neurons.

Does anyone know where I can add these numbers?

This is the code I ran:

nn <- caret::train(DC1 ~ ., data = df, 
                   method = "neuralnet", 
                   #tuneGrid = tune.grid.neuralnet,
                   metric = "RMSE",
                   trControl = trainControl(
                     method = "cv", number = 10,
                     verboseIter = TRUE
                   ))

By the way, I got some warnings with the previous code:

predictions failed for Fold01: layer1=3, layer2=0, layer3=0 Error in cbind(1, pred) %*% weights[[num_hidden_layers + 1]] : 
  requires numeric/complex matrix/vector arguments

Is there any way to fix this?

r neural-network cross-validation r-caret
1 Answer

When using the neuralnet model in caret, to specify the number of hidden units in each of the three supported layers you can use the parameters layer1, layer2 and layer3. I checked the source.

library(caret)

grid <-  expand.grid(layer1 = c(32, 16),
                     layer2 = c(32, 16),
                     layer3 = 8)
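You can confirm the tuning parameters yourself: caret's getModelInfo() returns the registered model definition, and its parameters table should list layer1, layer2 and layer3 for this method.

```r
library(caret)

# Inspect the tuning parameters caret exposes for method = "neuralnet".
getModelInfo("neuralnet")$neuralnet$parameters
```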

An example using the BostonHousing data:

library(mlbench)

data(BostonHousing)

Let's pick just the numeric columns to keep the example simple:

BostonHousing[,sapply(BostonHousing, is.numeric)] -> df

nn <- train(medv ~ ., 
            data = df, 
            method = "neuralnet", 
            tuneGrid = grid,
            metric = "RMSE",
            preProc = c("center", "scale", "nzv"), #good idea to do this with neural nets - your error is due to non scaled data
            trControl = trainControl(
              method = "cv",
              number = 5,
              verboseIter = TRUE)
            )

This part:

preProc = c("center", "scale", "nzv")

is needed for the algorithm to converge; neural nets don't like unscaled features.

It is super slow, though.
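If you want to see what that preprocessing does outside of train(), the standalone preProcess() function applies the same steps (note that inside train() the transformation is estimated on each training fold and applied to predictors; this sketch applies it to the whole data frame for illustration):

```r
library(caret)
library(mlbench)

data(BostonHousing)
df <- BostonHousing[, sapply(BostonHousing, is.numeric)]

# Standalone equivalent of preProc = c("center", "scale", "nzv"):
# estimate centering, scaling and near-zero-variance filtering,
# then apply the transformation with predict().
pp <- preProcess(df, method = c("center", "scale", "nzv"))
df_scaled <- predict(pp, df)
```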

nn
#output
Neural Network 

506 samples
 12 predictor

Pre-processing: centered (12), scaled (12) 
Resampling: Cross-Validated (5 fold) 
Summary of sample sizes: 405, 404, 404, 405, 406 
Resampling results across tuning parameters:

  layer1  layer2  RMSE      Rsquared   MAE     
  16      16           NaN        NaN       NaN
  16      32      4.177368  0.8113711  2.978918
  32      16      3.978955  0.8275479  2.822114
  32      32      3.923646  0.8266605  2.783526

Tuning parameter 'layer3' was held constant at a value of 8
RMSE was used to select the optimal model using the smallest value.
The final values used for the model were layer1 = 32, layer2 = 32 and layer3 = 8.