I'm new to PySpark, so I'm learning as I go.
I'm experimenting with unittest, but I'm running into the following error:
def drop_duplicates(df):
    df = df.dropDuplicates(df)
    return df
import unittest

class TestNotebook(unittest.TestCase):
    def test_drop_duplicates(self):
        data = (
            ['1', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '2', '1'],
            ['1', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '3', '1'],
            ['1', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '2', '2'],
            ['2', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '2', '1']
        )
        columns = ["ID", "TimeFrom", "TimeTo", "Serial", "Code"]
        df = spark.createDataFrame(data, columns)
        expected_data = [
            ('1', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '2', '1'),
            ('1', '2020-01-01 00:00:00', '2020-01-01 01:00:00', '2', '2')
        ]
        self.assertEqual(drop_duplicates(df), expected_data)

res = unittest.main(argv=[''], verbosity=2, exit=False)
(The assertion will probably fail, but I'll deal with that once I get past this error.) For now I just get the following:
  File "/tmp/ipykernel_15937/2907449366.py", line 2, in drop_duplicates
    df = df.dropDuplicates(df)
         ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/spark/python/pyspark/sql/dataframe.py", line 4207, in dropDuplicates
    raise PySparkTypeError(
pyspark.errors.exceptions.base.PySparkTypeError: [NOT_LIST_OR_TUPLE] Argument `subset` should be a list or tuple, got DataFrame.
Am I missing something? I've been reading the docs for this method, but I can't figure it out.