I'm trying to divide every column in a DataFrame by every other column:
df = spark.createDataFrame([(1, 2,3), (2, 4,6), (3, 6,9), (4, 8,12), (5, 10,15)], ["A", "B","C"])
so that the new column names are A_by_B, A_by_C, and so on.
I can do this in pandas as follows, but I'm not sure how it would work in PySpark:
df_new = pd.concat([df[df.columns.difference([col])].div(df[col], axis=0)\
.add_suffix(f'_by_{col}') for col in df.columns], axis=1)
df = spark.createDataFrame([(1, 2, 3), (2, 4, 6), (3, 6, 9)], ('A', 'B', 'C'))
df_1 = df.withColumn('A_by_B', df.A / df.B)
df_2 = df_1.withColumn('A_by_C', df.A / df.C)
df_3 = df_2.withColumn('B_by_A', df.B / df.A)
df_4 = df_3.withColumn('B_by_C', df.B / df.C)
df_4.show()
You can loop over the DataFrame columns like this to create each new X_by_Y column: