What explains this "pyarrow.lib.ArrowIOError: HDFS file does not exist" error when trying to read a file from HDFS with Dask?


I am using Dask Distributed and trying to create a dataframe from a CSV file stored in HDFS. I believe the connection to HDFS works, because I am able to print the names of the dataframe's columns. However, when I call the len function, or any other function, on the dataframe, I get the following error:

pyarrow.lib.ArrowIOError: HDFS file does not exist: /user/F43479/trip_data_v2.csv

I don't understand why I get this error, and I would like to hear your opinions.

Here is my code:

# IMPORTS
import dask.dataframe as dd
from dask.distributed import Client
import pyarrow as pa
from pyarrow import csv
from dask import compute, config
import os
import subprocess

# GET HDFS CLASSPATH
classpath = subprocess.Popen(["/usr/hdp/current/hadoop-client/bin/hdfs", "classpath", "--glob"], stdout=subprocess.PIPE).communicate()[0]

# CONFIGURE ENVIRONMENT VARIABLES
os.environ["HADOOP_HOME"] = "/usr/hdp/current/hadoop-client"
os.environ["JAVA_HOME"] = "/home/G60070/installs/jdk1.8.0_201/"
os.environ["CLASSPATH"] = classpath.decode("utf-8")
os.environ["ARROW_LIBHDFS_DIR"] = "/usr/hdp/2.6.5.0-292/usr/lib/"

# LAUNCH DASK DISTRIBUTED
client = Client('10.22.104.37:8786')

# SET HDFS CONNEXION
config.set(hdfs_driver='pyarrow', host='xxxxx.xxx.xx.fr', port=8020)

# READ FILE ON HDFS
folder = 'hdfs://xxxxx.xxx.xx.fr:8020/user/F43479/'
filepath = folder+'trip_data_v2.csv'
df = dd.read_csv(filepath)

# TREATMENTS ON FILE
print(df.columns)  # this works
print(len(df))  # produces an error

Here is the content of my HDFS home directory:

[F43479@xxxxx dask_tests]$ hdfs dfs -ls /user/F43479/
Found 9 items
-rw-r-----   3 F43479 hdfs            0 2019-03-07 16:42 /user/F43479/-
drwx------   - F43479 hdfs            0 2019-04-03 02:00 /user/F43479/.Trash
drwxr-x---   - F43479 hdfs            0 2019-03-13 16:53 /user/F43479/.hiveJars
drwxr-x---   - F43479 hdfs            0 2019-03-13 16:52 /user/F43479/hive
drwxr-x---   - F43479 hdfs            0 2019-03-15 13:23 /user/F43479/nyctaxi_trip_data
-rw-r-----   3 F43479 hdfs           36 2019-04-15 11:13 /user/F43479/test.csv
-rw-r-----   3 F43479 hdfs  50486731416 2019-03-26 17:37 /user/F43479/trip_data.csv
-rw-r-----   3 F43479 hdfs   5097056230 2019-04-15 13:57 /user/F43479/trip_data_v2.csv
-rw-r-----   3 F43479 hdfs 504867312828 2019-04-02 11:15 /user/F43479/trip_data_x10.csv

And finally, the full output of the code execution:

Index(['vendor_id', 'passenger_count', 'trip_time_in_secs', 'trip_distance'], dtype='object')
Traceback (most recent call last):
  File "dask_pa_hdfs.py", line 32, in <module>
    print(len(df))
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/dask/dataframe/core.py", line 438, in __len__
    split_every=False).compute()
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/dask/base.py", line 156, in compute
    (result,) = compute(self, traverse=False, **kwargs)
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/dask/base.py", line 397, in compute
    results = schedule(dsk, keys, **kwargs)
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/distributed/client.py", line 2321, in get
    direct=direct)
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/distributed/client.py", line 1655, in gather
    asynchronous=asynchronous)
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/distributed/client.py", line 673, in sync
    return sync(self.loop, func, *args, **kwargs)
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/distributed/utils.py", line 277, in sync
    six.reraise(*error[0])
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/six.py", line 693, in reraise
    raise value
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/distributed/utils.py", line 262, in f
    result[0] = yield future
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/tornado/gen.py", line 1133, in run
    value = future.result()
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/tornado/gen.py", line 1141, in run
    yielded = self.gen.throw(*exc_info)
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/distributed/client.py", line 1500, in _gather
    traceback)
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/six.py", line 692, in reraise
    raise value.with_traceback(tb)
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/dask/bytes/core.py", line 133, in read_block_from_file
    with copy.copy(lazy_file) as f:
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/dask/bytes/core.py", line 177, in __enter__
    f = SeekableFile(self.fs.open(self.path, mode=mode))
  File "/opt/anaconda3/envs/python3-dask/lib/python3.7/site-packages/dask/bytes/pyarrow.py", line 37, in open
    return self.fs.open(path, mode=mode, **kwargs)
  File "pyarrow/io-hdfs.pxi", line 431, in pyarrow.lib.HadoopFileSystem.open
  File "pyarrow/error.pxi", line 83, in pyarrow.lib.check_status
pyarrow.lib.ArrowIOError: HDFS file does not exist: /user/F43479/trip_data_v2.csv
python dask dask-distributed pyarrow

2 Answers

0 votes

You have carefully set up the environment in the local process containing the client, so that it can communicate with HDFS. That is enough for finding the columns, because Dask does this up front from the client process, using the first few rows of the data. However:

client = Client('10.22.104.37:8786')

Your scheduler and workers live elsewhere and do not have the environment variables you set. When your tasks run, the workers do not know how to find the file.
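You can verify this from the client with a quick diagnostic (a sketch, not part of the fix): client.run executes a function on every worker, so comparing the workers' environment with the local one should show the missing variables:

def check_env():
    import os
    return {k: os.environ.get(k) for k in
            ("HADOOP_HOME", "JAVA_HOME", "CLASSPATH", "ARROW_LIBHDFS_DIR")}

print(client.run(check_env))  # likely None for every variable on every worker
print(check_env())            # shows the values set in the local process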

What you need to do is set up the same environment on the workers as well. This can be done before they start, or once they are already running:

def setenv():
    import os
    os.environ["HADOOP_HOME"] = "/usr/hdp/current/hadoop-client"
    os.environ["JAVA_HOME"] = "/home/G60070/installs/jdk1.8.0_201/"
    os.environ["CLASSPATH"] = classpath.decode("utf-8")
    os.environ["ARROW_LIBHDFS_DIR"] = "/usr/hdp/2.6.5.0-292/usr/lib/"

client.run(setenv)

(This should return None from each worker.)

Note that if new workers come online dynamically, each of them needs to run this function before accessing HDFS.
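One way to automate that (assuming the version of distributed in use supports it) is to register the function as a worker callback, which runs it on every current worker and on any worker that joins later:

# Runs setenv on all current workers, and automatically on new ones.
client.register_worker_callbacks(setup=setenv)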


0 votes

I solved the problem. It was related to the permissions required to access HDFS. I am working on a Kerberized HDFS cluster, and I had launched the Dask scheduler process on an edge node and the worker processes on the data nodes. To access HDFS, pyarrow needs two things:

  • It must be installed on the scheduler and on every worker
  • The environment variables must also be configured on every node

Then, to access HDFS, the launched processes need to be authenticated through Kerberos. When the code ran from the scheduler process, I was able to connect to HDFS because my session was authenticated through Kerberos; that is why I could get the information about the CSV file's columns.

The worker processes, however, were not authenticated, so they could not access HDFS, which caused the error. To solve it, we had to stop the workers, modify the script used to launch them so that it includes a Kerberos authentication command (kinit something), and then restart them.

It works now, but it means Dask doesn't fit well with Kerberized clusters: with the configuration we made, all users end up with the same permissions on HDFS when computations are launched from the workers. I don't think that is a completely secure practice.
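For reference, one way to express that "kinit in the launch script" step in Python is a worker preload module, which dask-worker runs at startup when passed via --preload. This is only a sketch: the keytab path and principal are hypothetical placeholders, and the environment variables are set the same way as in the other answer.

# worker_preload.py -- start each worker with:
#   dask-worker 10.22.104.37:8786 --preload worker_preload.py
import os
import subprocess

def dask_setup(worker):
    # Same HDFS environment as on the client; paths are cluster-specific.
    os.environ["HADOOP_HOME"] = "/usr/hdp/current/hadoop-client"
    os.environ["JAVA_HOME"] = "/home/G60070/installs/jdk1.8.0_201/"
    os.environ["ARROW_LIBHDFS_DIR"] = "/usr/hdp/2.6.5.0-292/usr/lib/"
    classpath = subprocess.check_output(
        ["/usr/hdp/current/hadoop-client/bin/hdfs", "classpath", "--glob"])
    os.environ["CLASSPATH"] = classpath.decode("utf-8")
    # Authenticate this worker's session before it touches HDFS.
    # The keytab path and principal below are hypothetical placeholders.
    subprocess.run(
        ["kinit", "-kt", "/path/to/F43479.keytab", "F43479@YOUR.REALM"],
        check=True)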
