Snowflake unload to S3 produces parquet with no column names and the wrong data types


The following produces parquet files in S3:

USE DATABASE SANDBOX;
USE SCHEMA SANDBOX;

CREATE OR REPLACE FILE FORMAT my_parquet_format 
  TYPE = parquet;

COPY INTO @bla/x_
FROM (
    SELECT 
        TOP 10
        xxx AS "id",
    FROM table
)
FILE_FORMAT = (FORMAT_NAME = my_parquet_format)
OVERWRITE=TRUE;

Alas, when I read it back, the "id" column arrives as _COL_0 and the dtype is object:

import pandas as pd

s3_path = 's3://ddd/dddd__0_0_0.snappy.parquet'
df = pd.read_parquet(s3_path, engine='pyarrow')
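
To rule out a pandas quirk, the raw parquet schema can be inspected directly (a quick sketch assuming s3fs is installed; the bucket and file names are the placeholders from above):

import pyarrow.parquet as pq
import s3fs

fs = s3fs.S3FileSystem()
with fs.open('s3://ddd/dddd__0_0_0.snappy.parquet', 'rb') as f:
    print(pq.read_schema(f))  # the column shows up as _COL_0, not "id"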

The same happens with Dask. As some have suggested, I also tried unloading with a header:

USE DATABASE SANDBOX;
USE SCHEMA SANDBOX;

CREATE OR REPLACE FILE FORMAT my_parquet_format 
  TYPE = parquet;

COPY INTO @bla/x_
FROM (
    SELECT 
        TOP 10
        xxx AS "id",
    FROM table
)
FILE_FORMAT = (FORMAT_NAME = my_parquet_format)
OVERWRITE=TRUE HEADER=TRUE;

but that just produces corrupted parquet files. Any ideas? Thanks!

python pandas snowflake-cloud-data-platform dask parquet
1 Answer

I changed the above as follows. This forces a schema of sorts and brings the column names through. I guess pandas' category dtype can't be enforced from the Snowflake side, though?

CREATE OR REPLACE FILE FORMAT my_parquet_format 
  TYPE = parquet;

COPY INTO @bla/x_
FROM (
    SELECT 
        TOP 10
        xxx::SMALLINT AS "id"
    FROM table
)
file_format = (type = 'parquet')
header = true overwrite = true;
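
As a sanity check, reading the unloaded file back should now show the real column name; the category dtype then has to be applied client-side after the read (a sketch using the same placeholder path as the question):

import pandas as pd

df = pd.read_parquet('s3://ddd/dddd__0_0_0.snappy.parquet', engine='pyarrow')
print(df.dtypes)  # "id" should now arrive under its real name as a typed column, not object
df['id'] = df['id'].astype('category')  # apply pandas' category dtype after reading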