Finding dynamic intervals per group with sparklyr


I have a huge (~10 billion row) data.frame that looks something like this:

data <- data.frame(Person = c(rep("John", 9), rep("Steve", 7), rep("Jane", 4)),
                   Year   = c(1900:1908, 1902:1908, 1905:1908),
                   Grade  = c(c(6,3,4,4,8,5,2,9,7), c(4,3,5,5,6,4,7), c(3,7,2,9)))

This is a set of 3 people observed in different years, together with their annual grades. I want to create a variable that returns, for each grade, a "simplified grade". The simplified grade is just the grade cut into intervals. The difficulty is that the intervals differ from person to person. The interval thresholds for each Person are given in the following list:

list.threshold <- list(John = c(5,7), Steve = 4, Jane = c(3,5,8))

So Steve's grades will be cut into 2 intervals, while Jane's will be cut into 4. Here is the desired result (SimpleGrade):

    Person  Year  Grade  SimpleGrade
1:   John   1900    6        1
2:   John   1901    3        0
3:   John   1902    4        0
4:   John   1903    4        0
5:   John   1904    8        2
6:   John   1905    5        1
7:   John   1906    2        0
8:   John   1907    9        2
9:   John   1908    7        2
10:  Steve  1902    4        1
11:  Steve  1903    3        0
12:  Steve  1904    5        1
13:  Steve  1905    5        1
14:  Steve  1906    6        1
15:  Steve  1907    4        1
16:  Steve  1908    7        1
17:  Jane   1905    3        1
18:  Jane   1906    7        2
19:  Jane   1907    2        0
20:  Jane   1908    9        3
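As a quick sanity check of the mapping (this is my own illustration, not part of the question): `findInterval(g, brks)` counts how many thresholds grade `g` has reached, which is exactly the SimpleGrade column above.

```r
# Example data and thresholds from the question
data <- data.frame(Person = c(rep("John", 9), rep("Steve", 7), rep("Jane", 4)),
                   Year   = c(1900:1908, 1902:1908, 1905:1908),
                   Grade  = c(c(6,3,4,4,8,5,2,9,7), c(4,3,5,5,6,4,7), c(3,7,2,9)))
list.threshold <- list(John = c(5,7), Steve = 4, Jane = c(3,5,8))

# findInterval() reproduces SimpleGrade: it returns the number of
# thresholds that Grade has reached (0 = below the first threshold)
data$SimpleGrade <- mapply(function(p, g) findInterval(g, list.threshold[[p]]),
                           as.character(data$Person), data$Grade,
                           USE.NAMES = FALSE)
```

This only works locally, of course; the point of the question is doing the same thing on a Spark table.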

I need a solution in sparklyr because I am working with a huge Spark table.

In dplyr I would do something like this:

dplyr

data <- group_by(data, Person) %>%
  mutate(SimpleGrade = cut(Grade,
                           breaks = c(-Inf, list.threshold[[unique(Person)]], Inf),
                           labels = FALSE, right = TRUE, include.lowest = TRUE) - 1)

It works, but since the thresholds differ for each person, I cannot translate this solution to sparklyr. I think I will have to use the ft_bucketizer function. Here is where I am so far with sparklyr:

sparklyr

spark_tbl <- group_by(spark_tbl, Person) %>%
ft_bucketizer(input_col  = "Grade",
            output_col = "SimpleGrade",
            splits     = c(-Inf, list.threshold[["John"]], Inf))

spark_tbl is just the Spark-table equivalent of data. It works if I keep the thresholds fixed and only use John's.

Many thanks, Tom C.

r apache-spark dplyr sparklyr
1 Answer

The Spark ML Bucketizer can only be applied globally, so it won't work for your case. Instead, you can create a reference table:

ref <- purrr::map2(names(list.threshold),
                   list.threshold,
                   function(name, brks) purrr::map2(
                     c("-Infinity", brks), c(brks, "Infinity"),
                     function(low, high) list(
                       name = name,
                       low = low,
                       high = high))) %>%
  purrr::flatten() %>%
  bind_rows() %>%
  group_by(name) %>%
  arrange(low, .by_group = TRUE) %>%
  mutate(simple_grade = row_number() - 1) %>%
  copy_to(sc, .) %>%
  mutate_at(vars(one_of("low", "high")), as.numeric)
# Source: spark<?> [?? x 4]
  name    low  high simple_grade
  <chr> <dbl> <dbl>        <dbl>
1 Jane   -Inf     3            0
2 Jane      3     5            1
3 Jane      5     8            2
4 Jane      8   Inf            3
5 John   -Inf     5            0
6 John      5     7            1
7 John      7   Inf            2
8 Steve  -Inf     4            0
9 Steve     4   Inf            1

and then left_join it with the data table:

sdf <- copy_to(sc, data)

simplified <- left_join(sdf, ref, by = c("Person" = "name")) %>%
  filter(Grade >= low & Grade < high) %>%
  select(-low, -high)
simplified
# Source: spark<?> [?? x 4]
   Person  Year Grade simple_grade
   <chr>  <int> <dbl>        <dbl>
 1 John    1900     6            1
 2 John    1901     3            0
 3 John    1902     4            0
 4 John    1903     4            0
 5 John    1904     8            2
 6 John    1905     5            1
 7 John    1906     2            0
 8 John    1907     9            2
 9 John    1908     7            2
10 Steve   1902     4            1
# … with more rows
simplified %>% dbplyr::remote_query_plan()
== Physical Plan ==
*(2) Project [Person#132, Year#133, Grade#134, simple_grade#15]
+- *(2) BroadcastHashJoin [Person#132], [name#12], Inner, BuildRight, ((Grade#134 >= low#445) && (Grade#134 < high#446))
   :- *(2) Filter (isnotnull(Grade#134) && isnotnull(Person#132))
   :  +- InMemoryTableScan [Person#132, Year#133, Grade#134], [isnotnull(Grade#134), isnotnull(Person#132)]
   :        +- InMemoryRelation [Person#132, Year#133, Grade#134], StorageLevel(disk, memory, deserialized, 1 replicas)
   :              +- Scan ExistingRDD[Person#132,Year#133,Grade#134]
   +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, true]))
      +- *(1) Project [name#12, cast(low#13 as double) AS low#445, cast(high#14 as double) AS high#446, simple_grade#15]
         +- *(1) Filter ((isnotnull(name#12) && isnotnull(cast(high#14 as double))) && isnotnull(cast(low#13 as double)))
            +- InMemoryTableScan [high#14, low#13, name#12, simple_grade#15], [isnotnull(name#12), isnotnull(cast(high#14 as double)), isnotnull(cast(low#13 as double))]
                  +- InMemoryRelation [name#12, low#13, high#14, simple_grade#15], StorageLevel(disk, memory, deserialized, 1 replicas)
                        +- Scan ExistingRDD[name#12,low#13,high#14,simple_grade#15]
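The join logic above can also be checked locally without a Spark connection. The sketch below (my own addition, using the example data from the question) rebuilds the reference intervals as a plain data.frame and reproduces the join with base merge():

```r
# Example data and thresholds from the question
data <- data.frame(Person = c(rep("John", 9), rep("Steve", 7), rep("Jane", 4)),
                   Year   = c(1900:1908, 1902:1908, 1905:1908),
                   Grade  = c(c(6,3,4,4,8,5,2,9,7), c(4,3,5,5,6,4,7), c(3,7,2,9)))
list.threshold <- list(John = c(5,7), Steve = 4, Jane = c(3,5,8))

# One row per (person, interval), mirroring the Spark reference table
ref_local <- do.call(rbind, lapply(names(list.threshold), function(nm) {
  brks <- list.threshold[[nm]]
  data.frame(name = nm,
             low  = c(-Inf, brks),
             high = c(brks, Inf),
             simple_grade = 0:length(brks))
}))

# Cross every grade with that person's intervals, then keep the
# matching interval -- the same semantics as the Spark join + filter
joined <- merge(data, ref_local, by.x = "Person", by.y = "name")
simplified_local <- joined[joined$Grade >= joined$low & joined$Grade < joined$high,
                           c("Person", "Year", "Grade", "simple_grade")]
```

Note the non-equi condition is evaluated after the equi-join here, just as Spark pushes it into the BroadcastHashJoin in the plan above.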