sparklyr cannot filter out the missing value produced by `sd` on a single value

Problem description

Applying sd() to a single value in a Spark data frame (via the sparklyr package in R) produces a missing value, but that missing value cannot then be filtered out as missing.

Can anyone explain this and/or suggest a good solution?

A reproducible example follows.

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")
#> * Using Spark: 2.1.0

x <- data.frame(grp = c("a", "a", "c"), x = c(1, 2, 3))

copy_to(sc, x, "tmp", overwrite = TRUE)
#> # Source:   table<tmp> [?? x 2]
#> # Database: spark_connection
#>     grp     x
#>   <chr> <dbl>
#> 1     a     1
#> 2     a     2
#> 3     c     3

x_tbl <- tbl(sc, "tmp") %>% group_by(grp) %>% mutate(x_sd = sd(x))

x_tbl
#> # Source:   lazy query [?? x 3]
#> # Database: spark_connection
#> # Groups:   grp
#>     grp     x      x_sd
#>   <chr> <dbl>     <dbl>
#> 1     a     1 0.7071068
#> 2     a     2 0.7071068
#> 3     c     3       NaN

x_tbl %>% filter(!is.na(x_sd)) %>% collect()
#> # A tibble: 3 x 3
#> # Groups:   grp [2]
#>     grp     x      x_sd
#>   <chr> <dbl>     <dbl>
#> 1     a     1 0.7071068
#> 2     a     2 0.7071068
#> 3     c     3       NaN
Tags: r, apache-spark, dplyr, sparklyr
1 Answer

Score: 2

This is an incompatibility between sparklyr and Spark. Spark has both NULLs (roughly equivalent to R's NA) and NaNs, each with its own handling rules, but both values are fetched into R as NaN by sparklyr.
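
One way to see the mismatch is to inspect the SQL that dplyr generates for the filter. A minimal sketch against the x_tbl from the question (show_query() is the standard dplyr/dbplyr function; is.na() is translated to an IS NULL test, which Spark's NaN does not satisfy):

x_tbl %>% filter(!is.na(x_sd)) %>% show_query()
# The generated WHERE clause is a NOT (... IS NULL) test;
# NaN is not NULL in Spark, so the NaN row survives the filter.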

To filter out the NaNs you have to use isnan (not to be confused with R's is.nan):

x_tbl %>% filter(!isnan(x_sd)) %>% collect()
# A tibble: 2 x 3
# Groups:   grp [1]
    grp     x      x_sd
  <chr> <dbl>     <dbl>
1     a     1 0.7071068
2     a     2 0.7071068
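
Since is.na() (an IS NULL test in Spark) and isnan() check different things, a defensive filter that should drop both genuine NULLs and NaNs needs both predicates. A minimal sketch, assuming the same x_tbl (e.g. a group whose inputs are all NULL would give a NULL rather than a NaN from sd):

x_tbl %>%
  filter(!isnan(x_sd), !is.na(x_sd)) %>%  # drop NaNs and NULLs respectively
  collect()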

To better illustrate the problem:

df <- copy_to(sc,
  data.frame(x = c("1", "NaN", "")), "df", overwrite = TRUE
) %>% mutate(x = as.double(x))

df %>% mutate_all(funs(isnull, isnan)) 
# Source:   lazy query [?? x 3]
# Database: spark_connection
      x isnull isnan
  <dbl>  <lgl> <lgl> 
1     1  FALSE FALSE
2   NaN  FALSE  TRUE
3   NaN   TRUE FALSE
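
If downstream code is written in terms of is.na(), one workaround (a sketch, not part of the original answer) is to rewrite NaNs as NULLs inside Spark before filtering; ifelse() is translated to a CASE WHEN expression and NA to NULL:

x_tbl %>%
  mutate(x_sd = ifelse(isnan(x_sd), NA, x_sd)) %>%  # NaN -> NULL on the Spark side
  filter(!is.na(x_sd)) %>%
  collect()

Keep in mind that, as noted above, sparklyr still fetches any remaining NULLs back into R as NaN, so this normalization only helps for logic that executes inside Spark.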