I want to use Python's pandas library to read an .xlsx file and migrate the data into a PostgreSQL table.
So far, all I have is:

import pandas as pd
data = pd.ExcelFile("*File Name*")

I know this step executes successfully, but I'd like to know how to parse the Excel file that has been read, so I can understand how the data in the spreadsheet maps onto the data in the variable data.
If I'm not mistaken, data is a DataFrame object. How do I parse this DataFrame object to extract each row, one at a time?
I usually create a dictionary containing a DataFrame for each sheet:

xl_file = pd.ExcelFile(file_name)
dfs = {sheet_name: xl_file.parse(sheet_name)
       for sheet_name in xl_file.sheet_names}
You can get the same behavior more cleanly by passing sheet_name=None to read_excel:

dfs = pd.read_excel(file_name, sheet_name=None)
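To answer the row-by-row part of the question: once you have such a dict, you can walk each sheet's DataFrame row by row. A minimal sketch, assuming a dfs dict like the one above (the sheet and column names here are made up):

```python
import pandas as pd

# Hypothetical stand-in for the dfs dict built above
dfs = {"Sheet1": pd.DataFrame({"product": ["a", "b"], "qty": [1, 2]})}

rows = []
for sheet_name, df in dfs.items():
    # itertuples() is the idiomatic way to iterate rows; df.iterrows()
    # also works but yields (index, Series) pairs and is slower
    for row in df.itertuples(index=False):
        rows.append((sheet_name, row.product, row.qty))

print(rows)  # one tuple per spreadsheet row
```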
In version 0.20 and earlier, this parameter was sheetname rather than sheet_name (the old name is now deprecated in favor of the above):

dfs = pd.read_excel(file_name, sheetname=None)
Sometimes this code raises an error for .xlsx files:

pd.read_excel(file_name)
XLRDError: Excel xlsx file; not supported

Instead, you can use the openpyxl engine to read the Excel file:

df_samples = pd.read_excel(r'filename.xlsx', engine='openpyxl')
The following works for me:

from pandas import read_excel

my_sheet = 'Sheet1'  # change to your sheet name; it appears at the bottom left of the Excel window
file_name = 'products_and_categories.xlsx'  # change to the name of your Excel file
df = read_excel(file_name, sheet_name=my_sheet)
print(df.head())  # shows the header with the top 5 rows
pandas' read_excel function (a top-level function, not a DataFrame method) works much like read_csv:

dfs = pd.read_excel(xlsx_file, sheetname="sheet1")
Help on function read_excel in module pandas.io.excel:
read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0, index_col=None, names=None, parse_cols=None, parse_dates=False, date_parser=None, na_values=None, thousands=None, convert_float=True, has_index_names=None, converters=None, true_values=None, false_values=None, engine=None, squeeze=False, **kwds)
Read an Excel table into a pandas DataFrame
Parameters
----------
io : string, path object (pathlib.Path or py._path.local.LocalPath),
file-like object, pandas ExcelFile, or xlrd workbook.
The string could be a URL. Valid URL schemes include http, ftp, s3,
and file. For file URLs, a host is expected. For instance, a local
file could be file://localhost/path/to/workbook.xlsx
sheetname : string, int, mixed list of strings/ints, or None, default 0
Strings are used for sheet names, Integers are used in zero-indexed
sheet positions.
Lists of strings/integers are used to request multiple sheets.
Specify None to get all sheets.
str|int -> DataFrame is returned.
list|None -> Dict of DataFrames is returned, with keys representing
sheets.
Available Cases
* Defaults to 0 -> 1st sheet as a DataFrame
* 1 -> 2nd sheet as a DataFrame
* "Sheet1" -> 1st sheet as a DataFrame
* [0,1,"Sheet5"] -> 1st, 2nd & 5th sheet as a dictionary of DataFrames
* None -> All sheets as a dictionary of DataFrames
header : int, list of ints, default 0
Row (0-indexed) to use for the column labels of the parsed
DataFrame. If a list of integers is passed those row positions will
be combined into a ``MultiIndex``
skiprows : list-like
Rows to skip at the beginning (0-indexed)
skip_footer : int, default 0
Rows at the end to skip (0-indexed)
index_col : int, list of ints, default None
Column (0-indexed) to use as the row labels of the DataFrame.
Pass None if there is no such column. If a list is passed,
those columns will be combined into a ``MultiIndex``
names : array-like, default None
List of column names to use. If file contains no header row,
then you should explicitly pass header=None
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can
either be integers or column labels, values are functions that take one
input argument, the Excel cell content, and return the transformed
content.
true_values : list, default None
Values to consider as True
.. versionadded:: 0.19.0
false_values : list, default None
Values to consider as False
.. versionadded:: 0.19.0
parse_cols : int or list, default None
* If None then parse all columns,
* If int then indicates last column to be parsed
* If list of ints then indicates list of column numbers to be parsed
* If string then indicates comma separated list of column names and
column ranges (e.g. "A:E" or "A,C,E:F")
squeeze : boolean, default False
If the parsed data only contains one column then return a Series
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific
per-column NA values. By default the following values are interpreted
as NaN: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan',
'1.#IND', '1.#QNAN', 'N/A', 'NA', 'NULL', 'NaN', 'nan'.
thousands : str, default None
Thousands separator for parsing string columns to numeric. Note that
this parameter is only necessary for columns stored as TEXT in Excel,
any numeric columns will automatically be parsed, regardless of display
format.
keep_default_na : bool, default True
If na_values are specified and keep_default_na is False the default NaN
values are overridden, otherwise they're appended to.
verbose : boolean, default False
Indicate number of NA values placed in non-numeric columns
engine: string, default None
If io is not a buffer or path, this must be set to identify io.
Acceptable values are None or xlrd
convert_float : boolean, default True
convert integral floats to int (i.e., 1.0 --> 1). If False, all numeric
data will be read in as floats: Excel stores all numbers as floats
internally
has_index_names : boolean, default None
DEPRECATED: for version 0.17+ index names will be automatically
inferred based on index_col. To read Excel output from 0.16.2 and
prior that had saved index names, use True.
Returns
-------
parsed : DataFrame or Dict of DataFrames
DataFrame from the passed in Excel file. See notes in sheetname
argument for more information on when a Dict of Dataframes is returned.
If you don't know the sheet names, or can't open the Excel file to check (in my case: Python 3.6.7, Ubuntu 18.04), I skip the sheet name and use the index_col parameter instead (index_col=0 uses the first column as the row labels):

import pandas as pd
file_name = 'some_data_file.xlsx'
df = pd.read_excel(file_name, index_col=0)
print(df.head())  # print the first 5 rows
Assign the spreadsheet filename to file, load the spreadsheet, print the sheet names, then load a sheet into a DataFrame by name (df1):

file = 'example.xlsx'
xl = pd.ExcelFile(file)
print(xl.sheet_names)
df1 = xl.parse('Sheet1')
If you use open() on the file you pass to read_excel(), make sure to open it with mode 'rb' (binary) to avoid encoding errors.
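The point is that read_excel consumes bytes, not text. A small self-contained sketch using an in-memory buffer instead of a file on disk (assumes an Excel writer engine such as openpyxl is installed):

```python
import io
import pandas as pd

# Write a tiny frame to an in-memory Excel buffer, then read it back.
# A real file on disk would be opened with open(path, "rb") -- binary mode,
# not the default text mode.
buf = io.BytesIO()
pd.DataFrame({"a": [1, 2], "b": [3, 4]}).to_excel(buf, index=False)
buf.seek(0)
df = pd.read_excel(buf)
print(df.shape)  # (2, 2)
```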
To read an .xlsx file with pandas, you first need:

df = pd.read_excel(excel_file)

If you want to move this data into PostgreSQL, you need the psycopg2 library to connect to the PostgreSQL database. Here is demo code:

conn = psycopg2.connect(
    database="your_database",
    user="your_username",
    password="your_password",
    host="your_host",
    port="your_port"
)
After that, we can create a table in the database. Here is demo code:

cursor = conn.cursor()
create_table_query = """
CREATE TABLE IF NOT EXISTS your_table (
    column1 datatype1,
    column2 datatype2,
    ...
)
"""
cursor.execute(create_table_query)
We also need to make sure we have the necessary permissions and that our PostgreSQL server is running properly.
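Putting the pieces together, here is one way to push the DataFrame's rows into the table with psycopg2's executemany. The table and connection details are placeholders; the SQL-building helper is split out so it can be tried without a live database:

```python
import pandas as pd

def build_insert(df, table):
    """Build a parameterized INSERT statement plus the row tuples for executemany."""
    cols = ", ".join(df.columns)
    placeholders = ", ".join(["%s"] * len(df.columns))
    sql = f"INSERT INTO {table} ({cols}) VALUES ({placeholders})"
    rows = [tuple(r) for r in df.itertuples(index=False)]
    return sql, rows

def load_to_postgres(df, table, **conn_kwargs):
    """Insert every row of df into an existing PostgreSQL table."""
    import psycopg2  # imported here so build_insert works without it
    sql, rows = build_insert(df, table)
    conn = psycopg2.connect(**conn_kwargs)  # database=..., user=..., etc.
    with conn, conn.cursor() as cur:
        cur.executemany(sql, rows)  # psycopg2 fills in the %s placeholders
    conn.close()

# Usage (placeholder names):
# load_to_postgres(pd.read_excel("your_file.xlsx"), "your_table",
#                  database="your_database", user="your_username",
#                  password="your_password", host="your_host", port="your_port")
```

If you have SQLAlchemy installed, df.to_sql("your_table", engine) can do the same job in one call, including creating the table for you.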