How can I export data files stored in Google BigQuery to a GZ folder?


I am using code similar to the following to extract zipped files from BigQuery to GCS. Sometimes I need to extract around 90 files. I would like to extract one compressed folder instead of sending the files one by one. Note: I am using Jupyter. Thanks for your help.

from google.cloud import bigquery
client = bigquery.Client()

project_id = 'fh-bigquery'
dataset_id = 'public_dump'
table_id = 'afinn_en_165'


bucket_name = 'your_bucket'

destination_uri = 'gs://{}/{}'.format(bucket_name, 'file.csv.gz')

dataset_ref = client.dataset(dataset_id, project=project_id)
table_ref = dataset_ref.table(table_id)

job_config = bigquery.job.ExtractJobConfig()
job_config.compression = 'GZIP'  # write the exported CSV gzip-compressed

extract_job = client.extract_table(
    table_ref,
    destination_uri,
    job_config=job_config,
)
extract_job.result()  # wait for the extract job to finish
google-bigquery gzip
1 Answer

I believe there is no way to extract an entire dataset with a single API request. To export each of the corresponding tables to a Google Cloud Storage bucket, I would use the following code, which iterates over the dataset and runs one extract job per table ID:

from google.cloud import bigquery
from google.oauth2 import service_account

key_path = "SERVICE_ACCOUNT_PATH"
credentials = service_account.Credentials.from_service_account_file(
    key_path,
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

client = bigquery.Client(credentials=credentials)  # use the service-account credentials loaded above

project_id = 'PROJECT_ID'
dataset_id = 'DATASET_ID'
bucket_name = 'BUCKET_NAME'

dataset_ref = client.dataset(dataset_id, project=project_id)

for t in client.list_tables(dataset_ref):

    print("Extracting table {}".format(t.table_id))

    gz_file = '{}.csv.gz'.format(t.table_id)  # GZIP compression produces .gz, not .zip
    destination_uri = 'gs://{}/{}'.format(bucket_name, gz_file)

    table_ref = dataset_ref.table(t.table_id)

    job_config = bigquery.job.ExtractJobConfig()
    job_config.compression = 'GZIP'  # gzip-compress each exported CSV

    extract_job = client.extract_table(
        table_ref,
        destination_uri,
        job_config=job_config,
    )
    extract_job.result()  # block until this table's export completes
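
Note that this produces one individually gzip-compressed CSV per table rather than a single archive. If you really need one downloadable bundle, a minimal sketch (assuming the google-cloud-storage client library is installed and reusing credentials and bucket_name from above; the archive name dataset_export.tar is arbitrary) would be to download the per-table .csv.gz objects and pack them into one local tar file:

import io
import tarfile
from google.cloud import storage

storage_client = storage.Client(credentials=credentials)

# Collect every exported .csv.gz object into a single local tar archive.
with tarfile.open('dataset_export.tar', 'w') as archive:
    for blob in storage_client.list_blobs(bucket_name):
        if not blob.name.endswith('.csv.gz'):
            continue
        data = blob.download_as_bytes()            # fetch the compressed CSV
        member = tarfile.TarInfo(name=blob.name)   # keep the object name inside the archive
        member.size = len(data)
        archive.addfile(member, io.BytesIO(data))

Since each member is already gzip-compressed, wrapping them in a plain (uncompressed) tar avoids double compression.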