Speeding up loading into an Oracle database with Python

Question (votes: 0, answers: 1)

I'm trying to dump a CSV file into my table using cx_Oracle and Python, but it runs unbearably slowly (500 records in 336 seconds). If there is a faster approach, please point me to it. The code is below:

import pandas as pd
import cx_Oracle
import time


connection_string = '{}/{}@//{}:{}/{}'.format(user_name, password, host_name, port, service_name)
engine = cx_Oracle.connect(connection_string)


start_time = time.time()

t = pd.read_sql(con=engine, sql='select * from students where rownum < 18000')

print(t.shape)

t.to_sql(con=engine, name='students_new', if_exists='append', index=False)

print("Finished in : " + str(round(time.time() - start_time, 2)))
Tags: python-3.x, oracle, cx-oracle
1 Answer

2 votes

The sample code you've shown doesn't match the question as written.

If you want to load data from a CSV file into an Oracle database with Python, there is a direct cx_Oracle example in the Loading CSV Files into Oracle Database section of the manual. The key is to use executemany() so that each round trip to the database uploads as many rows as possible.

To cut and paste from the manual:

import cx_Oracle
import csv

. . .

# Predefine the memory areas to match the table definition
cursor.setinputsizes(None, 25)

# Adjust the batch size to meet your memory and performance requirements
batch_size = 10000

with open('testsp.csv', 'r') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    sql = "insert into test (id,name) values (:1, :2)"
    data = []
    for line in csv_reader:
        data.append((line[0], line[1]))
        if len(data) % batch_size == 0:
            cursor.executemany(sql, data)
            data = []
    if data:
        cursor.executemany(sql, data)
    con.commit()
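The batching logic in the loop above can be exercised without a database connection. The helper below is a minimal sketch (the name `batch_rows` is mine, not part of cx_Oracle) that yields fixed-size batches of rows, each of which would be passed to a single executemany() call:

```python
def batch_rows(rows, batch_size):
    """Yield lists of at most batch_size items from an iterable of rows."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

# In the manual's example, each yielded batch corresponds to one
# cursor.executemany(sql, batch) call, followed by a single commit at the end.
```

Keeping the batch size large (thousands of rows) is what reduces the per-row overhead: one network round trip per batch instead of one per row, which is why the row-at-a-time approach in the question is so slow.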