I can run a full export of a PostgreSQL table to a Google Storage bucket via a Sqoop job on Hadoop / Google Dataproc. However, when I attempt an incremental export, it fails.
gcloud dataproc jobs submit hadoop \
--cluster="$CLUSTER_NAME" \
--class=org.apache.sqoop.Sqoop \
--properties=mapreduce.job.classloader=true \
--jars="$UBER_JAR" \
--region="$CLUSTER_REGION" \
-- job --create "$job_name" \
-- import \
--connect="${CONNECTION_STRING}" \
--username="${SOURCE_USER}" \
--password="${SOURCE_PASSWORD}" \
--target-dir="gs://$WAREHOUSE_BUCKET_NAME/${EXPORT_DIRNAME}/${job_name}" \
--table="$table_name" \
--as-avrodatafile \
--incremental=append \
--split-by="${split_by}" \
--check-column created \
--last-value "2017-01-01 00:00:00.000000" \
--verbose
The logs indicate that it was able to export the data, but nothing lands in the Google Storage bucket. I see the warning "util.AppendUtils: Cannot append files to target dir; no such directory":
...
20/03/13 20:52:18 INFO mapreduce.ImportJobBase: Transferred 4.6844 MB in 15.9306 seconds (301.106 KB/sec)
20/03/13 20:52:18 INFO mapreduce.ImportJobBase: Retrieved 27783 records.
20/03/13 20:52:18 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@7dc36524
20/03/13 20:52:18 WARN util.AppendUtils: Cannot append files to target dir; no such directory: _sqoop/df1bc552c9754b5aa2db3a6c04b03a75_insights_action
20/03/13 20:52:18 INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments:
20/03/13 20:52:18 INFO tool.ImportTool: --incremental append
20/03/13 20:52:18 INFO tool.ImportTool: --check-column created
20/03/13 20:52:18 INFO tool.ImportTool: --last-value 2020-03-13 14:54:01.997784
20/03/13 20:52:18 INFO tool.ImportTool: (Consider saving this with 'sqoop job --create')
Job [1673b419f6c042d18dd8124f06e9c412] finished successfully.
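For reference, listing the target directory with gsutil (same path as in the command above) finds no objects:

gsutil ls -r "gs://$WAREHOUSE_BUCKET_NAME/${EXPORT_DIRNAME}/${job_name}/"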
Any idea if there is a workaround for this?
This is only a warning, not an error, and it should not cause the Sqoop job to fail. The comment in Sqoop's AppendUtils source explains:
// This occurs if there was no source (tmp) dir. This might happen
// if the import was an HBase-target import, but the user specified
// --append anyway. This is a warning, not an error.
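Since the append step is looking for a relative "_sqoop" temporary directory that apparently does not exist on the target filesystem, one thing worth trying (an untested sketch, reusing the variables from your question) is to override the temporary root with Sqoop's --temporary-rootdir import option, so the intermediate output and the target directory live on the same GCS bucket:

# Untested sketch: the default temporary root is the relative path "_sqoop"
# (visible in the warning above); pointing it explicitly at the same bucket
# as --target-dir keeps the temp output and the target on one filesystem.
# The "tmp/sqoop" prefix below is a hypothetical choice; everything else
# mirrors the command from the question.
gcloud dataproc jobs submit hadoop \
--cluster="$CLUSTER_NAME" \
--class=org.apache.sqoop.Sqoop \
--properties=mapreduce.job.classloader=true \
--jars="$UBER_JAR" \
--region="$CLUSTER_REGION" \
-- job --create "$job_name" \
-- import \
--connect="${CONNECTION_STRING}" \
--username="${SOURCE_USER}" \
--password="${SOURCE_PASSWORD}" \
--target-dir="gs://$WAREHOUSE_BUCKET_NAME/${EXPORT_DIRNAME}/${job_name}" \
--temporary-rootdir="gs://$WAREHOUSE_BUCKET_NAME/tmp/sqoop" \
--table="$table_name" \
--as-avrodatafile \
--incremental=append \
--split-by="${split_by}" \
--check-column created \
--last-value "2017-01-01 00:00:00.000000" \
--verbose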