Here is the query:
SELECT *
FROM my_db.sys.sql_modules
WHERE object_id = OBJECT_ID('my_db.dbo.view_name')
When executed in Azure Data Studio, the query returns the correct view definition.
But when the query is run through com.microsoft.sqlserver.jdbc.spark:
def exec_query(*, query: str, fetch_size: str = _fetch_size):
    # Read the result of an arbitrary SQL query through the
    # Microsoft SQL Server Spark connector.
    return (
        spark.read.format("com.microsoft.sqlserver.jdbc.spark")
        .option("url", _sqlserver_url)
        .option("user", _sqlserver_username)
        .option("password", _sqlserver_pwd)
        .option("query", query)
        .option("fetchsize", fetch_size)
        .load()
    )
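For context, this is roughly how the function is invoked (a minimal sketch; `module_definition_query` is a hypothetical helper I use here just to build the lookup string, not part of the connector):

```python
def module_definition_query(db: str, schema: str, name: str) -> str:
    # Build the sys.sql_modules lookup for a fully qualified object name.
    fq_name = f"{db}.{schema}.{name}"
    return (
        f"SELECT * FROM {db}.sys.sql_modules "
        f"WHERE object_id = OBJECT_ID('{fq_name}')"
    )

# df = exec_query(query=module_definition_query("my_db", "dbo", "view_name"))
# df.show()
```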
it gives the wrong result:
+----------+----------+---------------+----------------------+---------------+-----------------------+-------------+------------------+-----------------------+-----------------------+-----------+-------------+
| object_id|definition|uses_ansi_nulls|uses_quoted_identifier|is_schema_bound|uses_database_collation|is_recompiled|null_on_null_input|execute_as_principal_id|uses_native_compilation|inline_type|is_inlineable|
+----------+----------+---------------+----------------------+---------------+-----------------------+-------------+------------------+-----------------------+-----------------------+-----------+-------------+
|1862401804| NULL| true| true| false| false| false| false| NULL| false| false| false|
+----------+----------+---------------+----------------------+---------------+-----------------------+-------------+------------------+-----------------------+-----------------------+-----------+-------------+
Strangely, the object_id is correct, but the definition comes back as NULL.

How can I fix this?

Azure Data Studio and the script use the same login credentials.