PySpark SQL: select data where a column is null [duplicate]

Problem description · Votes: 0 · Answers: 1

How do I select only the rows that have a NULL value in a certain column in PySpark?

Setup

import numpy as np
import pandas as pd


# pyspark
import pyspark
from pyspark.sql import functions as F 
from pyspark.sql.types import *
from pyspark import SparkConf, SparkContext, SQLContext


spark = pyspark.sql.SparkSession.builder.appName('app').getOrCreate()
sc = spark.sparkContext
sqlContext = SQLContext(sc)
sc.setLogLevel("INFO")


# data
dft = pd.DataFrame({
    'Code': [1, 2, 3, 4, 5, 6],
    'Name': ['Odeon', 'Imperial', 'Majestic',
             'Royale', 'Paraiso', 'Nickelodeon'],
    'Movie': [5.0, 1.0, np.nan, 6.0, 3.0, np.nan]})


schema = StructType([
    StructField('Code', IntegerType(), True),
    StructField('Name', StringType(), True),
    StructField('Movie', FloatType(), True),
])

sdft = sqlContext.createDataFrame(dft, schema)
sdft.createOrReplaceTempView("MovieTheaters")
sdft.show()
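
As a side note, SQLContext has been deprecated since Spark 2.0; the same DataFrame can also be created directly from the SparkSession (an equivalent alternative to the sqlContext call above):

# equivalent, without the deprecated SQLContext
sdft = spark.createDataFrame(dft, schema)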

My attempt

spark.sql("""
select * from MovieTheaters where Movie is null
""").show()

+----+----+-----+
|Code|Name|Movie|
+----+----+-----+
+----+----+-----+

I get an EMPTY result. How can I fix this?

Expected output:

+----+-----------+-----+
|Code|       Name|Movie|
+----+-----------+-----+
|   3|   Majestic|  NaN|
|   6|Nickelodeon|  NaN|
+----+-----------+-----+
python pyspark pyspark-sql
1 Answer (1 vote)

When the pandas DataFrame is converted with a FloatType schema, np.nan becomes a floating-point NaN rather than a SQL NULL, so `where Movie is null` matches nothing. If you want to select the np.nan rows from the DataFrame, filter on NaN instead of NULL.
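
A minimal sketch using Spark's built-in isnan SQL function (any equivalent NaN test would work):

spark.sql("""
select * from MovieTheaters where isnan(Movie)
""").show()

+----+-----------+-----+
|Code|       Name|Movie|
+----+-----------+-----+
|   3|   Majestic|  NaN|
|   6|Nickelodeon|  NaN|
+----+-----------+-----+

The same filter with the DataFrame API:

sdft.filter(F.isnan('Movie')).show()

If the column may contain both real NULLs and NaN values, combine the two checks:

spark.sql("""
select * from MovieTheaters where Movie is null or isnan(Movie)
""").show()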
