Split one CSV file into multiple files

Problem description (votes: 0, answers: 13)

I have a CSV file of about 5000 rows in Python and I want to split it into five files.

I wrote code for it, but it does not work:

import codecs
import csv
NO_OF_LINES_PER_FILE = 1000
def again(count_file_header,count):
    f3 = open('write_'+count_file_header+'.csv', 'at')
    with open('import_1458922827.csv', 'rb') as csvfile:
        candidate_info_reader = csv.reader(csvfile, delimiter=',', quoting=csv.QUOTE_ALL)
        co = 0      
        for row in candidate_info_reader:
            co = co + 1
            count  = count + 1
            if count <= count:
                pass
            elif count >= NO_OF_LINES_PER_FILE:
                count_file_header = count + NO_OF_LINES_PER_FILE
                again(count_file_header,count)
            else:
                writer = csv.writer(f3,delimiter = ',', lineterminator='\n',quoting=csv.QUOTE_ALL)
                writer.writerow(row)

def read_write():
    f3 = open('write_'+NO_OF_LINES_PER_FILE+'.csv', 'at')
    with open('import_1458922827.csv', 'rb') as csvfile:


        candidate_info_reader = csv.reader(csvfile, delimiter=',', quoting=csv.QUOTE_ALL)

        count = 0       
        for row in candidate_info_reader:
            count  = count + 1
            if count >= NO_OF_LINES_PER_FILE:
                count_file_header = count + NO_OF_LINES_PER_FILE
                again(count_file_header,count)
            else:
                writer = csv.writer(f3,delimiter = ',', lineterminator='\n',quoting=csv.QUOTE_ALL)
                writer.writerow(row)

read_write()

The code above creates many files with empty content.

How can I split one file into five CSV files?

python csv split
13 Answers
49 votes

In Python, use readlines() and writelines() to do this. Here is an example:

>>> csvfile = open('import_1458922827.csv', 'r').readlines()
>>> filename = 1
>>> for i in range(len(csvfile)):
...     if i % 1000 == 0:
...         open(str(filename) + '.csv', 'w+').writelines(csvfile[i:i+1000])
...         filename += 1

The output file names will be numbered 1.csv, 2.csv, ... and so on.

From the terminal

Just FYI, you can do this from the command line using split, as follows:

$ split -l 1000 import_1458922827.csv

38 votes

I suggest you not reinvent the wheel. There is an existing solution. Source: here

import os


def split(filehandler, delimiter=',', row_limit=1000,
          output_name_template='output_%s.csv', output_path='.', keep_headers=True):
    import csv
    reader = csv.reader(filehandler, delimiter=delimiter)
    current_piece = 1
    current_out_path = os.path.join(
        output_path,
        output_name_template % current_piece
    )
    current_out_writer = csv.writer(open(current_out_path, 'w'), delimiter=delimiter)
    current_limit = row_limit
    if keep_headers:
        headers = reader.next()
        current_out_writer.writerow(headers)
    for i, row in enumerate(reader):
        if i + 1 > current_limit:
            current_piece += 1
            current_limit = row_limit * current_piece
            current_out_path = os.path.join(
                output_path,
                output_name_template % current_piece
            )
            current_out_writer = csv.writer(open(current_out_path, 'w'), delimiter=delimiter)
            if keep_headers:
                current_out_writer.writerow(headers)
        current_out_writer.writerow(row)

Use it like this:

split(open('/your/pat/input.csv', 'r'));

8 votes

A Python 3 friendly solution:

import csv
import os


def split_csv(source_filepath, dest_folder, split_file_prefix,
                records_per_file):
    """
    Split a source csv into multiple csvs of equal numbers of records,
    except the last file.

    Includes the initial header row in each split file.

    Split files follow a zero-index sequential naming convention like so:

        `{split_file_prefix}_0.csv`
    """
    if records_per_file <= 0:
        raise Exception('records_per_file must be > 0')

    with open(source_filepath, 'r', encoding='utf8') as source:
        reader = csv.reader(source)
        headers = next(reader)

        file_idx = 0
        records_exist = True

        while records_exist:

            i = 0
            target_filename = f'{split_file_prefix}_{file_idx}.csv'
            target_filepath = os.path.join(dest_folder, target_filename)

            with open(target_filepath, 'w') as target:
                writer = csv.writer(target)

                while i < records_per_file:
                    if i == 0:
                        writer.writerow(headers)

                    try:
                        writer.writerow(next(reader))
                        i += 1
                    except StopIteration:
                        records_exist = False
                        break

            if i == 0:
                # we only wrote the header, so delete that file
                os.remove(target_filepath)

            file_idx += 1
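A possible usage sketch (the destination folder and file prefix below are assumptions, not part of the answer): split the question's file into pieces of 1000 records each, named chunk_0.csv, chunk_1.csv, and so on:

os.makedirs('chunks', exist_ok=True)                         # make sure the destination folder exists
split_csv('import_1458922827.csv', 'chunks', 'chunk', 1000)  # 5000 rows -> roughly five files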

6 votes

A simple Python 3 solution using Pandas that does not cut off the last batch

def to_csv_batch(src_csv, dst_dir, size=30000, index=False):

    import pandas as pd
    import math
    
    # Read source csv
    df = pd.read_csv(src_csv)
    
    # Initial values
    low = 0
    high = size

    # Loop through batches
    for i in range(math.ceil(len(df) / size)):

        fname = dst_dir+'/Batch_' + str(i+1) + '.csv'
        df[low:high].to_csv(fname, index=index)
        
        # Update selection
        low = high
        if (high + size < len(df)):
            high = high + size
        else:
            high = len(df)

Example usage:

to_csv_batch('Batch_All.csv', 'Batches')
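Note that to_csv_batch writes into dst_dir but does not create it, so (an assumption about your environment, not stated in the answer) you may need to create the folder first:

import os

os.makedirs('Batches', exist_ok=True)     # ensure the output directory exists
to_csv_batch('Batch_All.csv', 'Batches')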

5 votes

I slightly modified the accepted answer to make it simpler.

EDIT: Added import statements and modified the print statement for printing the exception. The @Alex F code snippet was written for Python 2; for Python 3 you also need to use header_row = rows.__next__() instead of header_row = rows.next(). Thanks for pointing that out.

import os
import csv
def split_csv_into_chunks(file_location, out_dir, file_size=2):
    count = 0
    current_piece = 1

    # file_to_split_name.csv
    file_name = file_location.split("/")[-1].split(".")[0]
    split_file_name_template = file_name + "__%s.csv"
    splited_files_path = []

    if not os.path.exists(out_dir):
        os.makedirs(out_dir)
    try:
        with open(file_location, "rb") as csv_file:
            rows = csv.reader(csv_file, delimiter=",")
            headers_row = rows.next()
            for row in rows:
                if count % file_size == 0:
                    current_out_path = os.path.join(out_dir,
                                                    split_file_name_template%str(current_piece))
                    current_out_writer = None

                    current_out_writer = csv.writer(open(current_out_path, 'w'), delimiter=",")
                    current_out_writer.writerow(headers_row)
                    splited_files_path.append(current_out_path)
                    current_piece += 1

                current_out_writer.writerow(row)
                count += 1
        return True, splited_files_path
    except Exception as e:
        print("Exception occurred as {}".format(e))
        return False, splited_files_path
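For Python 3, a minimal sketch of just the reading side (an illustration using the question's file name, not the answer's full function): open the file in text mode and read the header with the built-in next() instead of rows.next():

import csv

with open("import_1458922827.csv", "r", newline="") as csv_file:
    rows = csv.reader(csv_file, delimiter=",")
    headers_row = next(rows)  # Python 3 replacement for rows.next()
    for row in rows:
        pass                  # the chunk-writing logic stays the same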

4 votes

Another pandas solution (every 1000 rows), similar to Aziz Alto's solution:

suffix = 1
for i in range(len(df)):
    if i % 1000 == 0:
        df[i:i+1000].to_csv(f"processed/{filename}_{suffix}.csv", sep ='|', index=False, index_label=False)
        suffix += 1

where df is the csv loaded as a pandas.DataFrame, filename is the original file name, the pipe is the separator, and index and index_label set to False skip the auto-incremented index column.
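For completeness, a minimal setup sketch for the snippet above (the file name is an assumption; the loop writes into a processed/ folder, which must already exist):

import os
import pandas as pd

filename = "import_1458922827"            # original file name without extension (assumed)
df = pd.read_csv(f"{filename}.csv")       # the csv loaded as a pandas.DataFrame
os.makedirs("processed", exist_ok=True)   # the loop writes into processed/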


3 votes

@Ryan, the Python 3 code worked for me. I used newline='' as shown below to avoid the blank-line problem:

with open(target_filepath, 'w', newline='') as target:

1 vote
if count <= count:
   pass

This condition is always true, so you pass every time.

Otherwise, you can take a look at this post: Splitting a CSV file into equal parts?
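For comparison, a minimal working sketch of what that loop could look like (a rewrite for illustration, not the original poster's code): start a new output file every NO_OF_LINES_PER_FILE rows instead of recursing:

import csv

NO_OF_LINES_PER_FILE = 1000

with open('import_1458922827.csv', 'r', newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=',')
    out = None
    writer = None
    for count, row in enumerate(reader):
        if count % NO_OF_LINES_PER_FILE == 0:
            if out:
                out.close()
            # open a new output file for the next block of rows
            out = open(f'write_{count // NO_OF_LINES_PER_FILE}.csv', 'w', newline='')
            writer = csv.writer(out, delimiter=',', quoting=csv.QUOTE_ALL)
        writer.writerow(row)
    if out:
        out.close()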


1 vote

I suggest you take advantage of what pandas offers. Here are the functions you could use to do this:

import logging
import math

import pandas as pd


def csv_count_rows(file):
    """
    Counts the number of rows in a file.
    :param file: path to the file.
    :return: number of lines in the designated file.
    """
    with open(file) as f:
        nb_lines = sum(1 for line in f)
    return nb_lines


def split_csv(file, sep=",", output_path=".", nrows=None, chunksize=None, low_memory=True, usecols=None):
    """
    Split a csv into several files.
    :param file: path to the original csv.
    :param sep: View pandas.read_csv doc.
    :param output_path: path in which to output the resulting parts of the splitting.
    :param nrows: Number of rows to split the original csv by, also view pandas.read_csv doc.
    :param chunksize: View pandas.read_csv doc.
    :param low_memory: View pandas.read_csv doc.
    :param usecols: View pandas.read_csv doc.
    """
    nb_of_rows = csv_count_rows(file)

    # Parsing file elements : Path, name, extension, etc...
    # file_path = "/".join(file.split("/")[0:-1])
    file_name = file.split("/")[-1]
    # file_ext = file_name.split(".")[-1]
    file_name_trunk = file_name.split(".")[0]
    split_files_name_trunk = file_name_trunk + "_part_"

    # Number of chunks to partition the original file into
    nb_of_chunks = math.ceil(nb_of_rows / nrows)
    if nrows:
        log_debug_process_start = f"The file '{file_name}' contains {nb_of_rows} ROWS. " \
            f"\nIt will be split into {nb_of_chunks} chunks of a max number of rows : {nrows}." \
            f"\nThe resulting files will be output in '{output_path}' as '{split_files_name_trunk}0 to {nb_of_chunks - 1}'"
        logging.debug(log_debug_process_start)

    for i in range(nb_of_chunks):
        # Number of rows to skip is determined by (the number of the chunk being processed) multiplied by (the nrows parameter).
        rows_to_skip = range(1, i * nrows) if i else None
        output_file = f"{output_path}/{split_files_name_trunk}{i}.csv"

        log_debug_chunk_processing = f"Processing chunk {i} of the file '{file_name}'"
        logging.debug(log_debug_chunk_processing)

        # Fetching the original csv file and handling it with skiprows and nrows to process its data
        df_chunk = pd.read_csv(filepath_or_buffer=file, sep=sep, nrows=nrows, skiprows=rows_to_skip,
                               chunksize=chunksize, low_memory=low_memory, usecols=usecols)
        df_chunk.to_csv(path_or_buf=output_file, sep=sep)

        log_info_file_output = f"Chunk {i} of file '{file_name}' created in '{output_file}'"
        logging.info(log_info_file_output)

Then, in your main script or jupyter notebook, put:

# This is how you initiate logging in the most basic way.
logging.basicConfig(level=logging.DEBUG)
file = {#Path to your file}
split_csv(file,sep=";" ,output_path={#Path where you'd like to output it},nrows = 4000000, low_memory = False)

P.S.1: I used nrows = 4000000 because it is a personal preference. You can change that number if you want.

P.S.2: I use the logging library to display messages. When applying a function like this to large files stored on a remote server, you really want to avoid "simple prints" and incorporate logging capabilities. You can replace logging.info and logging.debug with print.

P.S.3: Of course, you need to replace the {# Blablabla} parts of the code with your own parameters.


1 vote

A simpler script works for me.

import pandas as pd

path = "path to file"   # path to the input file
df = pd.read_csv(path)  # read the file

low = 0      # initial lower limit
high = 1000  # initial upper limit
part = 1
while low < len(df):
    df_new = df[low:high]  # subset the DataFrame based on index
    low = high             # move the lower limit up
    high = high + 1000     # raise the upper limit by 1000
    df_new.to_csv(f"output_part_{part}.csv")  # write each chunk to its own output file
    part += 1

1 vote

Building on the top-voted answer, here is a Python solution that also includes the header in each file.

file = open('file.csv', 'r')
header = file.readline()
csvfile = file.readlines()
filename = 1
batch_size = 1000
for i in range(len(csvfile)):
    if i % batch_size == 0:
        open(str(filename) + '.csv', 'w+').writelines(header)
        open(str(filename) + '.csv', 'a+').writelines(csvfile[i:i+batch_size])
        filename += 1

This outputs files named the same way, 1.csv, 2.csv, ... and so on.


0 votes
import pandas as pd

df = pd.read_csv('input.csv')

file_len = len(df)
filename = 'output'
n = 1
for i in range(file_len):
    if i % 10 == 0:
        sf = (df[i:i+10])
        sf.to_csv(f'{filename}_{n}.csv', index=False)
        n += 1

0 votes

Here is a very simple solution that does not loop over all the rows, only over the chunks; imagine what that saves if you have millions of rows.

chunk_size = 100_000
for i in range(len(df) // chunk_size + 1):
    df[i*chunk_size:(i+1)*chunk_size].to_csv(f"output_{i:02d}.csv", 
                                             sep=";", index=False)

You define the chunk size, and if the total number of rows is not an exact multiple of the chunk size, the last chunk contains the remainder.

With f"output_{i:02d}.csv", the suffix is formatted as two digits with a leading zero.

If you only want the header in the first chunk (and none in the others), you can use a boolean on the chunk index i == 0, i.e.:

for i in range(len(df) // chunk_size + 1):
    df[i*chunk_size:(i+1)*chunk_size].to_csv(f"output_{i:02d}.csv", 
                                             sep=";", index=False, header=(i == 0))