How can I process in parallel on a cluster using the spark-java API's MapFunction and ReduceFunction?

Problem description · Votes: -5 · Answers: 1

I am using spark-sql-2.4.1v with Java 8.

I have to perform calculations with group by under various conditions, using the Java API, i.e. using MapFunction and ReduceFunction.

Scenario:

The source data looks like the sample below:

+--------+--------------+-----------+-------------+---------+------+
| country|generated_date|industry_id|industry_name|  revenue| state|
+--------+--------------+-----------+-------------+---------+------+
|Country1|    2020-03-01|    Indus_1| Indus_1_Name| 12789979|State1|
|Country1|    2019-06-01|    Indus_1| Indus_1_Name| 56189008|State1|
|Country1|    2019-03-01|    Indus_1| Indus_1_Name| 12789979|State1|
|Country1|    2020-03-01|    Indus_2| Indus_2_Name| 21789933|State2|
|Country1|    2018-03-01|    Indus_2| Indus_2_Name|300789933|State2|
|Country1|    2019-03-01|    Indus_3| Indus_3_Name| 27989978|State3|
|Country1|    2017-06-01|    Indus_3| Indus_3_Name| 56189008|State3|
|Country1|    2017-03-01|    Indus_3| Indus_3_Name| 30014633|State3|
|Country2|    2020-03-01|    Indus_4| Indus_4_Name| 41789978|State1|
|Country2|    2018-03-01|    Indus_4| Indus_4_Name| 56189008|State1|
|Country3|    2019-03-01|    Indus_5| Indus_5_Name| 37899790|State3|
|Country3|    2018-03-01|    Indus_5| Indus_5_Name| 56189008|State3|
|Country3|    2017-03-01|    Indus_5| Indus_5_Name| 67789978|State3|
|Country1|    2020-03-01|    Indus_6| Indus_6_Name| 12789979|State1|
|Country1|    2020-06-01|    Indus_6| Indus_6_Name| 37899790|State1|
|Country1|    2018-03-01|    Indus_6| Indus_6_Name| 56189008|State1|
|Country3|    2020-03-01|    Indus_7| Indus_7_Name| 26689900|State1|
|Country3|    2020-12-01|    Indus_7| Indus_7_Name|212359979|State1|
|Country3|    2019-03-01|    Indus_7| Indus_7_Name| 12789979|State1|
|Country1|    2018-03-01|    Indus_8| Indus_8_Name|212359979|State2|
+--------+--------------+-----------+-------------+---------+------+

I need to perform various calculations, such as avg(revenue), for each given group and each given date. I am able to do this, but it does not scale on the spark-cluster.

For the same thing I am currently doing the following, but it does not scale at all... hence I understand that I need to use the Java MapFunction and ReduceFunction... but I do not know how to do this.

//Dates for which I need to calculate avg(revenue), provided by an external source
        List<String> datesToCalculate = Arrays.asList("2019-03-01","2020-06-01","2018-09-01");

        //Groups to calculate, provided by an external source; this keeps changing.
        //There are around 100s of groups.
        List<String> groupsToCalculate = Arrays.asList("Country","Country-State");

        //For each date in datesToCalculate, need to calculate avg(revenue) for each given group
        //over the records whose generated_date is later than that date.

        //Currently I am doing something like this, but it is not scaling

        datesToCalculate.stream().forEach( cal_date -> {

            Dataset<IndustryRevenue> calc_ds = ds.where(col("generated_date").gt(lit(cal_date)));

            //this keeps changing for each cal_date
            Dataset<Row> final_ds = calc_ds
                                      .withColumn("calc_date", to_date(lit(cal_date)).cast(DataTypes.DateType));

            //for each group it calculates a separate result set
            groupsToCalculate.stream().forEach( group -> {

                String tempViewName = "view_" + cal_date + "_" + group;

                final_ds.createOrReplaceTempView(tempViewName);

                //note: each entry of groupsToCalculate must expand to a valid group-by expression,
                //e.g. "country" or "country, state"
                String query = "select " + group + ", avg(revenue) as mean "
                                  + "from " + tempViewName
                                  + " group by " + group;

                System.out.println("query : " + query);
                Dataset<Row> resultDs  = spark.sql(query);

                Dataset<Row> finalResultDs  =  resultDs
                                 .withColumn("calc_date", to_date(lit(cal_date)).cast(DataTypes.DateType))
                                 .withColumn("group", lit(group)); //the group label is a plain string, not a date


                //Writing out each group for each date takes a huge amount of time,
                //because each result set is saved separately.
                //I want to move the write out of the loops, union all the finalResultDs, and write in batches.
                finalResultDs
                   .write().format("parquet")
                   .mode("append")
                   .save("/tmp/"+ tempViewName);

                spark.catalog().dropTempView(tempViewName);

            });

        });

Because of the for-loops, processing a few million records takes more than 20 hours. How can I avoid the for-loops so that this runs fast?

The sample code is here:

https://github.com/BdLearnerr/Java-mapReduce/blob/master/MapReduceScalingProblem.java
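Roughly, what I am hoping for is a single pass over the data that handles every calc_date and every grouping at once, and then writes everything in one batch instead of looping per date and per group. Below is a rough sketch of that shape with the Dataset API; it is only my assumption of how this could look (the group_name/group_value output columns, the cast of generated_date, and the output path are made up for illustration, and 'spark'/'ds' are the existing SparkSession and source Dataset):

import static org.apache.spark.sql.functions.*;

import java.util.Arrays;
import java.util.List;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;

// Assumes 'spark' is the SparkSession and 'ds' is the source Dataset<Row>
// with the columns shown above (country, generated_date, ..., revenue, state).
List<String> datesToCalculate = Arrays.asList("2019-03-01", "2020-06-01", "2018-09-01");

// One row per calc_date; cross-joined so every (source row, calc_date) pair exists,
// then filtered to rows generated after that calc_date.
Dataset<Row> calcDates = spark
        .createDataset(datesToCalculate, Encoders.STRING())
        .toDF("calc_date")
        .withColumn("calc_date", to_date(col("calc_date")));

Dataset<Row> expanded = ds
        .crossJoin(calcDates)
        .where(to_date(col("generated_date")).gt(col("calc_date")));

// One aggregation per grouping definition, then a single union and a single batched write.
Dataset<Row> byCountry = expanded
        .groupBy(col("calc_date"), col("country"))
        .agg(avg("revenue").alias("mean"))
        .select(col("calc_date"),
                lit("country").alias("group_name"),
                col("country").alias("group_value"),
                col("mean"));

Dataset<Row> byCountryState = expanded
        .groupBy(col("calc_date"), col("country"), col("state"))
        .agg(avg("revenue").alias("mean"))
        .select(col("calc_date"),
                lit("country-state").alias("group_name"),
                concat_ws("-", col("country"), col("state")).alias("group_value"),
                col("mean"));

byCountry.union(byCountryState)
        .write()
        .format("parquet")
        .mode("append")
        .partitionBy("calc_date", "group_name")
        .save("/tmp/group_means");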

Expected output:

+--------------+----------------+--------------+
| group-name   |   group-value  |         mean |
+--------------+----------------+--------------+
|country-state |Country1-State1 | 2.53448845E7 |
|country-state |Country3-State3 |   6.7789978E7|
|country-state |Country1-State2 | 1.919319606E8|
|country-state |Country4-State1 |    9.789979E7|
|country-state |Country1-State3 |   2.9339748E7|
|country-state |Country3-State1 |     2.66899E7|
|country-state |Country2-State1 |   4.1789978E7|
|country       |Country4        |    9.789979E7|
|country       |Country1        |   8.5696311E7|
|country       |Country3        |   4.7239939E7|
|country       |Country2        |   4.1789978E7|
+--------------+----------------+--------------+
java dataframe apache-spark apache-spark-sql apache-spark-dataset
1 Answer (1 vote)

Here is a partial solution to what I think is your immediate problem, but I have left some aspects for you to fill in. There are other approaches, but this is the one I took quickly based on my understanding. It works. No foreach req'd. I may have misunderstood what you need; if so, apologies. You may want to consider .cache with this approach.

// Assuming constant names in terms of country names are spelled similarly and consistently
// Not clear if by date or for selected dates. If selected dates then use another list 
// This approach will scale due to JOIN and AGG and no foreach, etc.
// Spark will fuse the code together if it can, but there are shuffles

// This is for Country, State. You can apply the approach to just Country and then UNION the 2 DF's with common names and definitions. Try it out
// NB: You make a custom grouping by concatenating the Country & State or you can leave as is, and for 2nd query you can just fill in country and put a blank value into the State.
// I leave that up to you.

import spark.implicits._

import org.apache.spark.sql.functions._
val dfC = Seq(("USA", "Ohio"), ("NZ", "Otago")).toDF("sCountry", "sState") // Your search criteria at Country / State level; a simple .isin cannot handle multiple columns, hence the JOIN below

val d = List("23-10-2001", "12-12-2003") // or Array

val dfS = Seq(
             ("USA", "Ohio", "23-10-2001", 2),
             ("USA", "Ohio", "23-10-2001", 2),
             ("USA", "Ohio", "23-10-2011", 2),
             ("USA", "Texas", "23-10-2001", 2),
             ("USA", "Virgina", "23-10-2001", 10),
             ("USA", "Virgina", "23-10-2001", 6),
             ("USA", "vanDiemensLand", "23-10-2001", 26),
             ("NL", "vanDiemensLand", "23-10-2001", 16),
             ("UK", "Middlesex", "23-10-2001", 3)
             ).toDF("country", "state", "date", "some_val") 
dfS.show(false)

// 1. For Country & State 
// The JOIN acts as a filter (it is an inner join) and avoids needing .isin over multiple cols, i.e. Country||State
val df1 = dfS.join(dfC, (dfS("country") === dfC("sCountry")) && (dfS("state") === dfC("sState"))).drop("sCountry").drop("sState")
df1.show(false)

val df2 = df1.filter($"date".isin(d:_*)).groupBy("country", "state").avg("some_val") 
df2.show(false)

// 2. For Country only
... to fill in by you
...

// 3. UNION df2 & df3
...

// 4. Save with partitioning.
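Since the question asks specifically for the Java API, the remaining steps (2-4) might look roughly as follows in Java. This is only a sketch: it assumes dfS, dfC and df2 exist as Java Dataset<Row> equivalents of the Scala DataFrames above, and the column names, "mean" alias and output path are assumptions, not part of the original answer.

import static org.apache.spark.sql.functions.*;

import java.util.Arrays;
import java.util.List;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Assumes dfS, dfC and df2 are Java Dataset<Row> equivalents of the Scala DataFrames above.
List<String> dates = Arrays.asList("23-10-2001", "12-12-2003");

// 2. For Country only: filter on the distinct search countries and aggregate per country,
//    filling in a blank state so the schema lines up with the Country/State result for the UNION.
Dataset<Row> countries = dfC.select("sCountry").distinct();
Dataset<Row> df3 = dfS
        .join(countries, dfS.col("country").equalTo(countries.col("sCountry")))
        .drop("sCountry")
        .filter(col("date").isin(dates.toArray()))
        .groupBy("country")
        .agg(avg("some_val").alias("mean"))
        .withColumn("state", lit(""))
        .select("country", "state", "mean");

// 3. UNION df2 & df3 (columns must line up by position).
Dataset<Row> result = df2
        .withColumnRenamed("avg(some_val)", "mean")
        .select("country", "state", "mean")
        .union(df3);

// 4. Save with partitioning, e.g. one output directory per country.
result.write()
        .format("parquet")
        .mode("overwrite")
        .partitionBy("country")
        .save("/tmp/country_state_means");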