R crashes with a segfault when calling keras::unserialize_model() inside a doParallel foreach loop

Votes: 0 · Answers: 1

I am running into a problem where R crashes when I call keras::unserialize_model() inside a doParallel foreach loop.

I had to sanitize this code, so I hope I have not mangled anything in the process. Also, I am not an R developer; I am trying to move some R code written by someone else into a production runtime environment.

If I run this code, the models load and nothing crashes:

#unserialize models locally
my_model1 <- keras::unserialize_model(ser_model1)
my_model2 <- keras::unserialize_model(ser_model2)
my_model3 <- keras::unserialize_model(ser_model3)
my_model4 <- keras::unserialize_model(ser_model4)
my_model5 <- keras::unserialize_model(ser_model5)

and I can start processing. But if I run the same calls inside a foreach() loop:

places  <- list( of things to run )
r <- foreach(i=places, .export = c("ser_model1", "ser_model2", "ser_model3", "ser_model4", "ser_model5"),
                  .packages = c("dplyr","av","imager","jpeg","tensorflow","keras","stringr","reticulate","caTools","imagerExtra","raster","readr","gsignal","data.table")) %dopar% {
    #unserialize models locally
    my_model1 <- keras::unserialize_model(ser_model1)
    my_model2 <- keras::unserialize_model(ser_model2)
    my_model3 <- keras::unserialize_model(ser_model3)
    my_model4 <- keras::unserialize_model(ser_model4)
    my_model5 <- keras::unserialize_model(ser_model5)

    # lots of processing here
    # eventually some_results <- whatever_computation()
    
    return(some_results)
}

then the code crashes with a segfault on the keras::unserialize_model(ser_model1) call:

2024-04-06 21:32:07.768352: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-04-06 21:32:07.773084: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-04-06 21:32:07.834125: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-04-06 21:32:09.108273: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

*** caught segfault ***
address (nil), cause 'memory not mapped'

Traceback:
1: conditionMessage_from_py_exception(c)
2: conditionMessage.python.builtin.BaseException(errorValue)
3: conditionMessage(errorValue)
4: sprintf("task %d failed - \"%s\"", errorIndex, conditionMessage(errorValue))
5: e$fun(obj, substitute(ex), parent.frame(), e$data)
6: Redacted foreach statement
7: calling_my_function_above()
8: perform_model(inputs)
An irrecoverable exception occurred. R is aborting now ...
Segmentation fault (core dumped)

Here is my session info:

R version 4.3.3 (2024-02-29)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 22.04.1 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.10.0
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

time zone: UTC
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices datasets  utils     methods   base

other attached packages:
[1] dplyr_1.1.4  rjson_0.2.21 hash_2.2.6.3 DBI_1.2.2    odbc_1.4.2

loaded via a namespace (and not attached):
 [1] utf8_1.2.4       R6_2.5.1         tidyselect_1.2.1 bit_4.0.5
 [5] magrittr_2.0.3   glue_1.7.0       bspm_0.5.5.1     blob_1.2.4
 [9] tibble_3.2.1     pkgconfig_2.0.3  generics_0.1.3   bit64_4.0.5
[13] lifecycle_1.0.4  cli_3.6.2        fansi_1.0.6      vctrs_0.6.5
[17] hms_1.1.3        pillar_1.9.0     Rcpp_1.0.12
[21] rlang_1.1.3

As mentioned above, removing the foreach() loop lets the code proceed. I have reduced the thread count from 8 to 2, and I have tried to minimize the code as much as possible. The problem seems to be the call for my_model1: if I comment it out (and keep the other unserialize_model() calls), the code continues without segfaulting.

Maybe the "Could not find TensorRT" warning is a problem, but since the other calls work without issue, I have to believe it is not. (Is it?)

How can I find out what is interesting about ser_model1 that causes the crash? Why is the "task failed" message shown in the call stack never printed? It seems like it would provide some insight. And how do I debug R when it segfaults and many other libraries and dependencies are involved?

r keras doparallel parallel-foreach
1 Answer
0 votes

It is not safe to fork a process that uses TensorFlow. TensorFlow maintains its own thread pools, and attempting to fork the main process will lead to segfaults. This is true whether you are using the Python or the R interface.

Consequently, any parallelization method in R that forks the current R process will not work. That includes foreach with a forking backend, mclapply, and so on.
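One way around this (a sketch, not part of the original answer) is to run foreach over a PSOCK cluster, which spawns fresh R worker processes instead of forking, so each worker initializes its own TensorFlow runtime. The variable names (places, ser_model1) follow the question; process_one_place() is a hypothetical stand-in for the per-item work:

```r
library(doParallel)  # attaches foreach and parallel as well

# PSOCK workers are new R processes (no fork), so each one loads
# TensorFlow independently and avoids the fork-related segfault.
cl <- parallel::makePSOCKcluster(2)
registerDoParallel(cl)

r <- foreach(i = places,
             .export   = c("ser_model1"),  # plus the other serialized models
             .packages = c("keras")) %dopar% {
    # Each worker deserializes its own copy: a Keras model object is a
    # reference to Python state and cannot be shared across processes.
    my_model1 <- keras::unserialize_model(ser_model1)
    process_one_place(i, my_model1)  # hypothetical per-item work
}

parallel::stopCluster(cl)
```

Serializing the models and rebuilding them inside each worker, as the question already does, is the right pattern here; the only change is the cluster type the workers run on.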
