Python cx_Oracle Insert Into table with multiple columns automating the values (:1, :2 ... :100)

Question — Votes: 0, Answers: 1

I'm working on a script that reads data from an Oracle table with 75 columns in one environment and loads it into the same table definition in another environment. So far I have been using the cx_Oracle cur.execute() method with "INSERT INTO TABLENAME VALUES(:1,:2,:3...:8);" and then loading the data with "cur.execute(sql, conn)".

However, the table I'm trying to load has roughly 75 or more columns, and writing out (:1, :2 ... :75) by hand is tedious, and I suspect it isn't best practice.

Is there an automated way to loop over the number of columns and fill in the VALUES() part of the SQL query automatically?

import getpass
import cx_Oracle

user = 'username'
password = getpass.getpass()  # "pass" is a reserved word in Python
dsn_prod = cx_Oracle.makedsn(host, port, service_name='')
connection_prod = cx_Oracle.connect(user, password, dsn_prod)  # makedsn() only builds a DSN string; connect() opens the connection
cursor_prod = connection_prod.cursor()

dsn_dev = cx_Oracle.makedsn(host, port, service_name='')
connection_dev = cx_Oracle.connect(user, password, dsn_dev)
cursor_dev = connection_dev.cursor()

SQL_Read = """Select * from Table_name_Prod"""
cursor_prod.execute(SQL_Read)
for row in cursor_prod:
    # This part is ugly and tedious.
    SQL_Load = "INSERT INTO TABLE_NAME_DEV VALUES(:1, :2,:3, :4 ...:75)"
    cursor_dev.execute(SQL_Load, row)

This is where I need help.

connection_dev.commit()
cursor_prod.close()
connection_prod.close()
python-3.x jupyter-notebook sql-insert cx-oracle
1 Answer

3 votes

You can do the following, which not only reduces the code but also improves performance.

connection_prod = cx_Oracle.connect(...)
cursor_prod = connection_prod.cursor()

# set array size for source cursor to some reasonable value
# increasing this value reduces round-trips but increases memory usage
cursor_prod.arraysize = 500

connection_dev = cx_Oracle.connect(...)
cursor_dev = connection_dev.cursor()

cursor_prod.execute("select * from table_name_prod")
bind_names = ",".join(":" + str(i + 1)
        for i in range(len(cursor_prod.description)))
sql_load = "insert into table_name_dev values (" + bind_names + ")"
while True:
    rows = cursor_prod.fetchmany()
    if not rows:
        break
    cursor_dev.executemany(sql_load, rows)
    # can call connection_dev.commit() here if you want to commit each batch
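As a quick sanity check of the placeholder-building expression above, here is what it produces for a hypothetical 4-column table (plain Python, no database needed):

```python
# Build numbered bind placeholders the same way as above,
# for a hypothetical table with 4 columns.
ncols = 4
bind_names = ",".join(":" + str(i + 1) for i in range(ncols))
sql_load = "insert into table_name_dev values (" + bind_names + ")"
print(sql_load)  # insert into table_name_dev values (:1,:2,:3,:4)
```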

Using cursor.executemany() will significantly improve performance. Hope this helps!
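If you want to try the fetchmany()/executemany() batching pattern end to end without an Oracle instance, the same DB-API shape can be exercised with Python's built-in sqlite3 module. Note this is just an illustrative sketch: sqlite3 uses "?" placeholders rather than cx_Oracle's ":1" style, and the table and column names here are made up.

```python
import sqlite3

# Two in-memory databases stand in for the prod and dev environments.
conn_prod = sqlite3.connect(":memory:")
conn_dev = sqlite3.connect(":memory:")

conn_prod.execute("create table t (a integer, b text, c real)")
conn_prod.executemany("insert into t values (?, ?, ?)",
                      [(i, str(i), i / 2) for i in range(10)])

conn_dev.execute("create table t (a integer, b text, c real)")

cur_prod = conn_prod.cursor()
cur_prod.arraysize = 4          # fetchmany() batch size, as in the answer
cur_prod.execute("select * from t")

# One placeholder per source column, derived from cursor.description.
bind_names = ",".join("?" for _ in cur_prod.description)
sql_load = "insert into t values (" + bind_names + ")"

cur_dev = conn_dev.cursor()
while True:
    rows = cur_prod.fetchmany()
    if not rows:
        break
    cur_dev.executemany(sql_load, rows)
conn_dev.commit()

print(conn_dev.execute("select count(*) from t").fetchone()[0])  # 10
```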
