I am using pyspark == 2.4.3, and I just want to run an hql file that contains
use myDatabaseName;
show tables;
Here is what I tried:
from os.path import expanduser, join, abspath
from pyspark.sql import SparkSession
from pyspark.sql import Row
# warehouse_location points to the default location for managed databases and tables
warehouse_location = abspath('spark-warehouse')
spark = SparkSession \
.builder \
.appName("Python Spark SQL Hive integration example") \
.config("spark.sql.warehouse.dir", warehouse_location) \
.enableHiveSupport() \
.getOrCreate()
with open('full/path/to/my/hqlfile') as t:
    q = t.read()
print q
'use myDatabaseName;show tables;\n'
spark.sql(q)
But I get:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/some/path/python2.7/site-packages/pyspark/sql/session.py", line 767, in sql
return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
File "/some/path/python2.7/site-packages/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/some/path/python2.7/site-packages/pyspark/sql/utils.py", line 73, in deco
raise ParseException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.ParseException: u"\nmismatched input ';' expecting <EOF>(line 1, pos 11)\n\n== SQL ==\nuse myDatabaseName;show tables;\n-----------^^^\n"
What am I doing wrong?
As the error suggests, `;` is not valid syntax inside spark.sql, and, secondly, you cannot run two commands in a single spark.sql call. I changed q into a list of query strings with the `;` removed, then looped over it:
query_lt = q.split(";")[:-1]  # drop the trailing '\n' fragment after the last ';'
for qs in query_lt:
    spark.sql(qs)  # one statement per spark.sql call
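For a slightly more defensive split, you can also trim whitespace and drop empty fragments, so trailing semicolons or blank lines never produce an empty query. A minimal sketch (`split_statements` is a hypothetical helper, not part of PySpark; the `spark.sql` call is commented out so the split can be shown on its own):

```python
def split_statements(script):
    """Split a SQL script on ';', trimming whitespace and dropping empty fragments."""
    return [s.strip() for s in script.split(';') if s.strip()]

q = 'use myDatabaseName;show tables;\n'
statements = split_statements(q)
print(statements)  # → ['use myDatabaseName', 'show tables']
for stmt in statements:
    pass  # spark.sql(stmt) would go here, one statement per call
```

This avoids relying on the exact position of the final `;`, which the `[:-1]` slice above depends on.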