How to save a PySpark DataFrame to a parquet file


I just installed PySpark without Hadoop, since I don't need it and the PySpark documentation doesn't recommend installing it. Does everyone install Hadoop just to save parquet files on a local machine?

My code:

from datetime import datetime, date
import pandas as pd
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.appName("Example").getOrCreate()
df = spark.createDataFrame([
      Row(a=1, b=2., c='string1', d=date(2000, 1, 1), e=datetime(2000, 1, 1, 12, 0)),
      Row(a=2, b=3., c='string2', d=date(2000, 2, 1), e=datetime(2000, 1, 2, 12, 0)),
      Row(a=4, b=5., c='string3', d=date(2000, 3, 1), e=datetime(2000, 1, 3, 12, 0))
])
df.write.parquet('bar.parquet') # error 

My environment:

  • Windows 11
  • Python 3.11.8
  • PySpark 3.5.0
  • VS Code notebook

My error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
Cell In[1], line 12
      5 spark = SparkSession.builder.appName("Example").getOrCreate()
      6 df = spark.createDataFrame([
      7     Row(a=1, b=2., c='string1', d=date(2000, 1, 1), e=datetime(2000, 1, 1, 12, 0)),
      8     Row(a=2, b=3., c='string2', d=date(2000, 2, 1), e=datetime(2000, 1, 2, 12, 0)),
      9     Row(a=4, b=5., c='string3', d=date(2000, 3, 1), e=datetime(2000, 1, 3, 12, 0))
     10 ])
---> 12 df.write.parquet('bar.parquet')

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pyspark\sql\readwriter.py:1721, in DataFrameWriter.parquet(self, path, mode, partitionBy, compression)
   1719     self.partitionBy(partitionBy)
   1720 self._set_opts(compression=compression)
-> 1721 self._jwrite.parquet(path)

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\py4j\java_gateway.py:1322, in JavaMember.__call__(self, *args)
   1316 command = proto.CALL_COMMAND_NAME +\
   1317     self.command_header +\
   1318     args_command +\
   1319     proto.END_COMMAND_PART
   1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
   1323     answer, self.gateway_client, self.target_id, self.name)
   1325 for temp_arg in temp_args:
   1326     if hasattr(temp_arg, "_detach"):

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pyspark\errors\exceptions\captured.py:179, in capture_sql_exception.<locals>.deco(*a, **kw)
    177 def deco(*a: Any, **kw: Any) -> Any:
    178     try:
--> 179         return f(*a, **kw)
    180     except Py4JJavaError as e:
    181         converted = convert_exception(e.java_exception)

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\py4j\protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
    324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325 if answer[1] == REFERENCE_TYPE:
--> 326     raise Py4JJavaError(
    327         "An error occurred while calling {0}{1}{2}.\n".
    328         format(target_id, ".", name), value)
    329 else:
    330     raise Py4JError(
    331         "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
    332         format(target_id, ".", name, value))

Py4JJavaError: An error occurred while calling o43.parquet.
: java.lang.RuntimeException: java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see https://wiki.apache.org/hadoop/WindowsProblems
Tags: python, python-3.x, apache-spark, hadoop, pyspark
1 Answer

No, you don't need Hadoop to save a parquet file. The error occurs because Spark's own writer on Windows expects the Hadoop winutils binaries (hence the HADOOP_HOME message), but you can bypass Spark's writer entirely.

One approach is to convert the Spark DataFrame to a pandas DataFrame and write it directly:

df.toPandas().to_parquet('bar.parquet')
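
Note that toPandas() collects the entire dataset onto the driver, so this only suits data that fits in memory; pandas also needs a parquet engine (pyarrow or fastparquet) installed for to_parquet to work. To verify the result, the file can be read back with pandas:

pd.read_parquet('bar.parquet')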

Another is to use PyArrow directly.
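
A minimal sketch of the PyArrow route, assuming the same df from the question and that the pyarrow package is installed:

import pyarrow as pa
import pyarrow.parquet as pq

# Move the rows to the driver as a pandas DataFrame, then build an Arrow table.
table = pa.Table.from_pandas(df.toPandas())

# Write a single local parquet file; no Hadoop or winutils involved.
pq.write_table(table, 'bar.parquet')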

See these answers for detailed explanations:

Parquet without Hadoop?

How to view Apache Parquet files in Windows?
