Modifying a column in a Spark DataFrame based on a specific condition

Problem description

I want to apply a formula to clientIPInt (which is in Int format, duh!) and write the result to a separate column.

Sample Input: df_A
    +----+------------------------+
    |num |clientIPInt             |
    +----+------------------------+
    |1275|200272593               |
    |145 |0                       |
    |2678|200274543               |
    |6578|200272593               |
    |1001|200272593               |
    +----+------------------------+
Output:

+----+------------------------+----------------+
|num |clientIPInt             |ip64bigint      |
+----+------------------------+----------------+
|1275|200272593               |3521834763      |
|145 |0                       |0               |
|2678|200274543               |1878191883      |
|6578|200272593               |3521834763      |
|1001|200272593               |3521834763      |
+----+------------------------+----------------+

I created a UDF to do the conversion. Below is my attempt.

import org.apache.spark.sql.functions.udf

// Map non-positive Ints into the unsigned 32-bit range, then swap the byte order.
val ipToLong = udf { (ipInt: Int) =>
  val i =
    if (ipInt <= 0) ipInt.toLong + 4294967296L
    else ipInt.toLong
  ((i & 255L) * 16777216L) + ((i & 65280L) * 256L) +
    ((i & 16711680L) / 256L) + ((i / 16777216L) & 255L)
}

val b = df_A.withColumn("ip64bigint", ipToLong(df_A.col("clientIPInt")))
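
The formula itself matches the expected output; for example, a quick plain-Scala check (a sketch for illustration; the helper name swapBytes is not from the original code):

// 200272593 is 11.239.234.209; reversing the byte order gives
// 209.234.239.11 = 3521834763, the value expected above.
def swapBytes(ipInt: Int): Long = {
  val i = if (ipInt <= 0) ipInt.toLong + 4294967296L else ipInt.toLong
  ((i & 255L) * 16777216L) + ((i & 65280L) * 256L) +
    ((i & 16711680L) / 256L) + ((i / 16777216L) & 255L)
}
assert(swapBytes(200272593) == 3521834763L)
assert(swapBytes(0) == 0L)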

However, this UDF is not very efficient: Spark treats a UDF as a black box, so Catalyst cannot optimize the expression.

Next I tried a pure Column function instead, but the code below does not work:

def newCol(col: Column): Column = {
  // Does not compile: Column has no .toLong method.
  when(col <= 0,
    (((col.toLong + 4294967296L) & 255L) * 16777216L) + (((col.toLong + 4294967296L) & 65280L) * 256L) +
      (((col.toLong + 4294967296L) & 16711680L) / 256L) + (((col.toLong + 4294967296L) / 16777216L) & 255L))
    .otherwise(((col & 255L) * 16777216L) + ((col & 65280L) * 256L) +
      ((col & 16711680L) / 256L) + ((col / 16777216L) & 255L))
}

val d = df_A.withColumn("ip64bigint", newCol($"clientIPInt"))

I really don't want to convert the DataFrame df_A to a Dataset[case class of the columns], because I have more than 140 columns in the DataFrame.

Any idea what I am doing wrong in the Column function, or any other way to transform the data?

scala apache-spark dataframe apache-spark-sql transformation
1 Answer

Here is a working solution. Your Column version fails because Column has no toLong method; cast to "long" instead, and compute the adjusted value once in an intermediate column:

Sample dataframe =>

import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
val data =
  Seq(
    Row(1275, 200272593),
    Row(145, 0),
    Row(2678, 200274543),
    Row(6578, 200272593),
    Row(1001, 200272593))

val dF = spark.createDataFrame(spark.sparkContext.parallelize(data),
  StructType(List(StructField("num", IntegerType, nullable = true),
    StructField("clientIPInt", IntegerType, nullable = true))))
+----+-----------+
| num|clientIPInt|
+----+-----------+
|1275|  200272593|
| 145|          0|
|2678|  200274543|
|6578|  200272593|
|1001|  200272593|
+----+-----------+

Using the functions provided by Spark =>

import spark.implicits._
import org.apache.spark.sql.functions._
dF.withColumn("i", when('clientIPInt <= 0, ('clientIPInt cast "long") + 4294967296L).otherwise('clientIPInt cast "long"))
    .withColumn("ip64bigint", (('i.bitwiseAND(255L) * 16777216L) + ('i.bitwiseAND(65280L) * 256L) + ('i.bitwiseAND(16711680L) / 256L) + ('i / 16777216L).cast("long").bitwiseAND(255L)) cast "long")
       .drop("i").show(false)

Output =>

+----+-----------+----------+
|num |clientIPInt|ip64bigint|
+----+-----------+----------+
|1275|200272593  |3521834763|
|145 |0          |0         |
|2678|200274543  |1878191883|
|6578|200272593  |3521834763|
|1001|200272593  |3521834763|
+----+-----------+----------+
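
The same byte swap can also be written with Spark's shiftLeft/shiftRight column functions instead of multiplying and dividing by powers of two; a sketch of an equivalent expression (not from the original answer, assuming the same dF as above):

import spark.implicits._
import org.apache.spark.sql.functions.{shiftLeft, shiftRight, when}

// Shifts replace * 16777216L, * 256L, / 256L and / 16777216L.
val viaShifts = dF
  .withColumn("i", when('clientIPInt <= 0, ('clientIPInt cast "long") + 4294967296L)
    .otherwise('clientIPInt cast "long"))
  .withColumn("ip64bigint",
    (shiftLeft('i.bitwiseAND(255L), 24) +
     shiftLeft('i.bitwiseAND(65280L), 8) +
     shiftRight('i.bitwiseAND(16711680L), 8) +
     shiftRight('i, 24).bitwiseAND(255L)) cast "long")
  .drop("i")

Both versions compile to plain Catalyst expressions, so they should perform similarly.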