Pandas: reading multiple large .bz2 files and appending them together

Problem description

I have 30 .bz2 files to read. Each file is too large to read in full, so a chunk of size x from each file is enough. I then want to append all 30 files together.

import os
import glob
import pandas as pd
import numpy as np

path = r'/content/drive/My Drive/'                     # use your path
all_files = glob.glob(os.path.join(path, "*.bz2"))     # os.path.join keeps the concatenation OS independent

# Below I read 10,000 lines x 11 for each file because of the RAM limit and append the chunks together.
# How do I make it also append each of the 30 files together? I made an attempt below.

chunks = (pd.read_json(f, lines=True, chunksize = 1000) for f in all_files)
i = 0
chunk_list = []
for chunk in chunks:
    if i >= 11:
        break
    i += 1
    chunk_list.append(chunk)
    df = pd.concat(chunk_list, sort = True)
#print(df)
df

Sample .bz2 data can be found at: https://csr.lanl.gov/data/2017.html

python json pandas for-loop glob
1 Answer
import os
import glob
import pandas as pd

pd.set_option('display.max_columns', None)

path_to_json = '/content/drive/My Drive/'

json_pattern = os.path.join(path_to_json, '*.bz2')
file_list = glob.glob(json_pattern)

frames = []
for file in file_list:
    # lines=True with chunksize makes read_json return an iterator of DataFrames
    chunks = pd.read_json(file, lines=True, chunksize=1000)
    chunk_list = []
    for i, chunk in enumerate(chunks):
        if i >= 10:              # keep only the first 10 chunks (10,000 lines) per file
            break
        chunk_list.append(chunk)
    frames.append(pd.concat(chunk_list, sort=True))

# concatenate once at the end; DataFrame.append was removed in pandas 2.0
temp = pd.concat(frames, sort=True)
temp

This seems to work.
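As a minimal sketch of a slightly tighter variant: itertools.islice can cap the number of chunks taken from each file without a manual counter, and a single pd.concat at the end avoids building intermediate frames per file. The path and the 10-chunk cap are carried over from the answer above as assumptions; adjust them to your setup.

import os
import glob
from itertools import islice

import pandas as pd

path_to_json = '/content/drive/My Drive/'   # path assumed from the answer above
file_list = glob.glob(os.path.join(path_to_json, '*.bz2'))

pieces = []
for file in file_list:
    # the reader yields one DataFrame per 1,000 JSON lines
    reader = pd.read_json(file, lines=True, chunksize=1000)
    # islice(reader, 10) takes at most the first 10 chunks (10,000 lines)
    pieces.extend(islice(reader, 10))

df = pd.concat(pieces, sort=True, ignore_index=True)

Either way, only the first 10,000 lines of each file are ever materialized, so the full files, which are too large to read whole, never have to fit in RAM.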
