I know I asked a similar question here, but that was for row filtering. This time I am trying to drop columns instead. I spent a while trying to apply higher-order functions like FILTER, but couldn't get them to work. What I think I need is a SELECT higher-order function, but that doesn't seem to exist. Thanks for your help!
I am using pyspark and have a dataframe object df. Below is the output of df.printSchema():
root
|-- M_MRN: string (nullable = true)
|-- measurements: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- Observation_ID: string (nullable = true)
| | |-- Observation_Name: string (nullable = true)
| | |-- Observation_Result: string (nullable = true)
I would like to keep only the Observation_ID and Observation_Result columns inside measurements. Currently, when I run df.select('measurements').take(2), I get:
[Row(measurements=[Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='108/72'),
Row(Observation_ID='11', Observation_Name='ABC', Observation_Result='70'),
Row(Observation_ID='10', Observation_Name='ABC', Observation_Result='73.029'),
Row(Observation_ID='14', Observation_Name='XYZ', Observation_Result='23.1')]),
Row(measurements=[Row(Observation_ID='2', Observation_Name='ZZZ', Observation_Result='3/4'),
Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='7')])]
After performing the filtering described above, running df.select('measurements').take(2) should instead give:
[Row(measurements=[Row(Observation_ID='5', Observation_Result='108/72'),
Row(Observation_ID='11', Observation_Result='70'),
Row(Observation_ID='10', Observation_Result='73.029'),
Row(Observation_ID='14', Observation_Result='23.1')]),
Row(measurements=[Row(Observation_ID='2', Observation_Result='3/4'),
Row(Observation_ID='5', Observation_Result='7')])]
Is it possible to do this in pyspark? Thanks for your help!
You can use transform to select the desired fields and wrap them in a struct.
from pyspark.sql import functions as F
df.withColumn(
    "measurements",
    F.expr("""transform(measurements,
                        x -> struct(x.Observation_ID as Observation_ID,
                                    x.Observation_Result as Observation_Result))"""),
).printSchema()
#root
#|-- measurements: array (nullable = true)
#| |-- element: struct (containsNull = false)
#| | |-- Observation_ID: string (nullable = true)
#| | |-- Observation_Result: string (nullable = true)
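For intuition: transform simply maps a projection over every element of the array column, rebuilding each struct with only the fields you name. The same reshaping in plain Python (dicts standing in for structs; the helper name is mine, for illustration only) looks like:

```python
def project_measurements(measurements):
    """Keep only Observation_ID and Observation_Result in each array element."""
    return [
        {"Observation_ID": m["Observation_ID"],
         "Observation_Result": m["Observation_Result"]}
        for m in measurements
    ]

row = [
    {"Observation_ID": "5", "Observation_Name": "ABC", "Observation_Result": "108/72"},
    {"Observation_ID": "11", "Observation_Name": "ABC", "Observation_Result": "70"},
]
print(project_measurements(row))
# Observation_Name is dropped from every element, matching the desired output.
```

As a side note, Spark 3.1+ also exposes transform directly as pyspark.sql.functions.transform, which accepts a Python lambda instead of a SQL expression string, if you prefer to avoid F.expr.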