PySpark error when inserting data into Hive

Problem description (votes: 0, answers: 1)

I am working with PySpark code, and when I try to insert data from PySpark into a Hive table, I get an error. I searched Google but could not find anything useful.

When I run this insert statement directly in Hive, it works fine, but through Spark it throws the error. My insert statement looks like this:

INSERT INTO TABLE QA_Result VALUES( 'Table_100_columns_tiny', 's3a://rbspoc-sas/sas_100_columns_tiny.csv', 'default', 'Yes', '210', '210', '(COL51=32.1000000),(COL62=17.8000000),(COL7=71393.5482355),(COL47=21.3000000),(COL58=17.1000000),(COL39=29.7000000),(COL55=8.0000000),(COL49=40096.1000000),(COL8=-1782477.8622806),(COL66=21.2000000),(COL28=6.2920000),(COL31=4851.1877388),(COL17=5.2860000),(COL27=5.0800000),(COL42=5493.3000000),(COL6=-5707379.1906659),(COL20=3.6720000),(COL38=15.4000000),(COL32=4.8200000),(COL60=23.9000000),(COL63=23.5000000),(COL36=5.1340000),(COL25=5.5390000),(COL43=17.1000000),(COL57=21.1000000),(COL46=23.0000000),(COL52=26.0000000),(COL14=5.0780000),(COL16=5.5300000),(COL40=19.3000000),(COL45=22.9000000),(COL21=6.0570000),(COL15=4.7380000),(COL9=4.6110000),(COL10=4.1230000),(COL5=180.0000000),(COL13=6.0490000),(COL37=14.9000000),(COL24=5.5730000),(COL64=29.3000000),(COL35=4.9500000),(COL26=4.8420000),(COL19=5.3460000),(COL53=14.5000000),(COL56=16.6000000),(COL11=6.2100000),(COL50=43.2000000),(COL61=18.6000000),(COL44=22.4000000),(COL33=4.4690000),(COL29=2.3800000),(COL48=22.7000000),(COL22=3.9550000),(COL34=5.2160000),(COL18=3.4470000),(COL12=5.4570000),(COL59=31.7000000),(COL23=5.0200000),(COL41=15.6000000),(COL30=4.3820000),(COL54=19.3000000),(COL65=34.2000000)', '(COL51=32.1000000),(COL62=17.8000000),(COL7=71393.5482355),(COL47=21.3000000),(COL58=17.1000000),(COL39=29.7000000),(COL55=8.0000000),(COL49=40096.1000000),(COL8=-1782477.8622806),(COL66=21.2000000),(COL28=6.2920000),(COL31=4851.1877388),(COL17=5.2860000),(COL27=5.0800000),(COL42=5493.3000000),(COL6=-5712141.0954278),(COL20=3.6720000),(COL38=15.4000000),(COL32=4.8200000),(COL60=23.9000000),(COL63=23.5000000),(COL36=5.1340000),(COL25=5.5390000),(COL43=17.1000000),(COL57=21.1000000),(COL46=23.0000000),(COL52=26.0000000),(COL14=5.0780000),(COL16=5.5300000),(COL40=19.3000000),(COL45=22.9000000),(COL21=6.0570000),(COL15=4.7380000),(COL9=4.6110000),(COL10=4.1230000),(COL5=180.0000000),(COL13=6.0490000),(COL37=14.9000000),(COL24=5.5730000),(COL64=29.3000000),(COL35=4.9500000),(COL26=4.8420000),(COL19=5.3460000),(COL53=14.5000000),(COL56=16.6000000),(COL11=6.2100000),(COL50=43.2000000),(COL61=18.6000000),(COL44=22.4000000),(COL33=4.4690000),(COL29=2.3800000),(COL48=22.7000000),(COL22=3.9550000),(COL34=5.2160000),(COL18=3.4470000),(COL12=5.4570000),(COL59=31.7000000),(COL23=5.0200000),(COL41=15.6000000),(COL30=4.3820000),(COL54=19.3000000),(COL65=34.2000000)', '(COL3=5),(COL4=25),(COL67=8),(COL68=8),(COL69=8),(COL70=8),(COL71=8),(COL72=8),(COL73=8),(COL74=24),(COL75=8),(COL76=8),(COL77=8),(COL78=8),(COL79=8),(COL80=8),(COL81=8),(COL82=8),(COL83=8),(COL84=8),(COL85=8),(COL86=8),(COL87=8),(COL88=8),(COL89=8),(COL90=8),(COL91=8),(COL92=8),(COL93=8),(COL94=8),(COL95=8),(COL96=8),(COL97=8),(COL98=8),(COL99=8),(COL100=2)','(COL3=5),(COL4=25),(COL67=8),(COL68=8),(COL69=8),(COL70=8),(COL71=8),(COL72=8),(COL73=8),(COL74=24),(COL75=8),(COL76=8),(COL77=8),(COL78=8),(COL79=8),(COL80=8),(COL81=8),(COL82=8),(COL83=8),(COL84=8),(COL85=8),(COL86=8),(COL87=8),(COL88=8),(COL89=8),(COL90=8),(COL91=8),(COL92=8),(COL93=8),(COL94=8),(COL95=8),(COL96=8),(COL97=8),(COL98=8),(COL99=8),(COL100=2)', '2', '1', 'Fail','2018-02-04 07:31:30','2018-02-04 07:31:52','Data match is different. 2 row(s) are not in target and 1 row(s) are not in source,, Average values are different for columns [COL6]')
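In Spark, such a statement is submitted through SparkSession.sql, and the session has to be built with Hive support enabled for inserts into Hive tables to work. A minimal Scala sketch of that execution path is below; the object and app names and the shortened VALUES list are placeholders (the asker's actual code is PySpark, where the equivalent call is spark.sql(...)).

import org.apache.spark.sql.SparkSession

object QAResultInsert {
  def main(args: Array[String]): Unit = {
    // Hive support must be enabled, otherwise INSERT INTO a Hive table
    // fails regardless of the statement itself.
    val spark = SparkSession.builder()
      .appName("qa-result-insert") // hypothetical app name
      .enableHiveSupport()
      .getOrCreate()

    // Shortened stand-in for the full 16-value statement quoted above.
    spark.sql(
      """INSERT INTO TABLE QA_Result VALUES(
        |'Table_100_columns_tiny', 's3a://rbspoc-sas/sas_100_columns_tiny.csv',
        |'default', 'Yes', 210, 210, '...', '...', '...', '...', 2, 1, 'Fail',
        |'2018-02-04 07:31:30', '2018-02-04 07:31:52', 'Data match is different.')""".stripMargin)

    spark.stop()
  }
}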

The error is:

-chgrp: '' does not match expected pattern for group
Usage: hadoop fs [generic options] -chgrp [-R] GROUP PATH...
-chgrp: '' does not match expected pattern for group
Usage: hadoop fs [generic options] -chgrp [-R] GROUP PATH...

For reference, the CREATE TABLE statement is:

CREATE EXTERNAL TABLE QA_Result (
    TableName String,
    SourceDB String,
    TargetDB String,
    StructureValidation String,
    SourceRecordCount BigInt,
    TargetRecordCount BigInt,
    SourceAverage String,
    TargetAverage String,
    SourceStringLength String,
    TargetStringLength String,
    SourceDataDiff BigInt,
    TargetDataDiff BigInt,
    Status String,
    StartDateTime Timestamp,
    EndDateTime Timestamp,
    Comments String)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
apache-spark hive pyspark pyspark-sql
1 Answer

0 votes

The hadoop-common code prints these messages directly to stderr and does not go through any logger, so you cannot suppress them in the usual way. You can, however, set your own process's stderr stream to a custom class that filters them out (this worked for me):

System.setErr(new SuppressErrors("org.apache.hadoop.fs"))

Here is the SuppressErrors class:

import java.io.{FileDescriptor, FileOutputStream, PrintStream}

// Writes to the real stderr file descriptor, but drops any write whose
// calling stack contains a class from one of the given packages.
class SuppressErrors(packages: String*) extends PrintStream(new FileOutputStream(FileDescriptor.err)) {

  // True when the current call originates from one of the filtered packages.
  def filter(): Boolean =
    Thread.currentThread()
      .getStackTrace
      .exists(el => packages.exists(el.getClassName.contains))

  override def write(b: Int): Unit = {
    if (!filter()) super.write(b)
  }

  override def write(buf: Array[Byte], off: Int, len: Int): Unit = {
    if (!filter()) super.write(buf, off, len)
  }

  override def write(b: Array[Byte]): Unit = {
    if (!filter()) super.write(b)
  }
}
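For completeness, a sketch of how the filter might be wired into the failing job: install it before Spark makes its first Hadoop filesystem call, then run the insert as before. The object name and the stand-in INSERT statement below are assumptions for illustration, not part of the original answer.

import org.apache.spark.sql.SparkSession

object QuietQAInsert {
  // Stand-in for the full INSERT statement quoted in the question.
  val insertStatement: String =
    "INSERT INTO TABLE QA_Result VALUES('t', 's3a://bucket/f.csv', 'default', 'Yes', " +
      "1, 1, '', '', '', '', 0, 0, 'Pass', '2018-02-04 07:31:30', '2018-02-04 07:31:52', '')"

  def main(args: Array[String]): Unit = {
    // Replace stderr before the first Hadoop filesystem call, so the
    // -chgrp noise printed from org.apache.hadoop.fs classes is dropped.
    System.setErr(new SuppressErrors("org.apache.hadoop.fs"))

    val spark = SparkSession.builder()
      .enableHiveSupport()
      .getOrCreate()

    spark.sql(insertStatement)
    spark.stop()
  }
}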