Processing a huge file in multiple threads and writing it back to another file


I have a huge XML file (almost 5 GB). I am trying to search through the whole file, find certain tags and rename them. Following the same idea as here, I split the file into 10 MB chunks, search each chunk, and if a chunk contains a search term I hand it to a helper that reads the chunk line by line and replaces the tags. It does not work! When the queued results are merged and the file is written back, the output file starts from arbitrary positions.

import re, threading, Queue
FILE_R = "C:\\Users\\USOMZIA\\Desktop\\ABB_Work\\ERCOT\\Modifying_cim_model\\omid2.xml"
FILE_WR = "C:\\Users\\USOMZIA\\Desktop\\ABB_Work\\ERCOT\\Modifying_cim_model\\x3.xml"
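# get_chunks yields (start_offset, length) pairs; each chunk is roughly `size`
# bytes, extended to the end of the line it lands in so no line is split
# across two chunks.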
def get_chunks(file_r, size = 1024 * 1024):
    with open(file_r, 'rb') as f:
        while 1:
            start = f.tell()
            f.seek(size, 1)
            s = f.readline()
            yield start, f.tell() - start

            if not s:
                break

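# Re-open the file, seek to the chunk and rewrite any line that contains one
# of the keys in `mapp`; lines without a match are passed through unchanged.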
def process_line_by_line(file_r, chunk):
    with open(file_r, "rb") as f:
        f.seek(chunk[0])
        read_line_list = []
        for line_f in f.read(chunk[1]).splitlines():
            find_match = False
            for match_str in mapp:
                if match_str in str(line_f):
                    find_match = True
                    new_line = str(line_f).replace(match_str, mapp[match_str]) 
                    read_line_list.append(new_line)
                    break
            if not find_match:
                read_line_list.append(str(line_f))

    return read_line_list

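# Read one chunk; if the compiled pattern matches anywhere in it, rewrite the
# chunk line by line, otherwise just split it into lines.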
def process(file_r, chunk):
    read_group_list = []
    # open in binary so the byte offsets produced by get_chunks stay valid
    with open(file_r, "rb") as f:
        f.seek(chunk[0])
        s = f.read(chunk[1])
        if len(pattern.findall(s)) > 0:
            read_group_list = process_line_by_line(file_r, chunk)
        else:
            # reuse the chunk already read; a second f.read() here would return
            # the *next* chunk because the file position has already moved on
            read_group_list = s.splitlines()
    return read_group_list

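# Each Worker pulls (file, chunk) jobs from the global queue and appends the
# processed lines to the global result list in completion order.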
class Worker(threading.Thread):
    def run(self):
        while 1:
            chunk = queue.get()
            if chunk is None:
                break
            result.append(process(*chunk))
            queue.task_done()       





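# --- driver: build the tag-replacement map, compile the search pattern,
# start the worker(s), enqueue the chunks and write the merged output ---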
import time
start_time = time.time()
mapp = {"cim:ConformLoad rdf:ID": "cim:CustomerLoad rdf:ID", "cim:Load rdf:ID": "cim:CustomerLoad rdf:ID", "cim:NonConformLoad rdf:ID": "cim:CustomerLoad rdf:ID", 
        "cim:InductionMotorLoad rdf:ID": "cim:CustomerLoad rdf:ID", "cim:NonConformLoadGroup rdf:ID": "cim:ConformLoadGroup rdf:ID",
        "cim:NonConformLoad.LoadGroup": "cim:ConformLoad.LoadGroup",
        "/cim:ConformLoad>": "/cim:CustomerLoad>", "/cim:Load>": "/cim:CustomerLoad>", "/cim:NonConformLoad>": "/cim:CustomerLoad>",
        "/cim:InductionMotorLoad>": "/cim:CustomerLoad>", "/cim:NonConformLoadGroup>": "/cim:ConformLoadGroup>"}
# Build one alternation from all the keys; they already carry their own
# "cim:" / "/cim:" prefixes, and re.escape keeps the literal dots in names
# like "cim:NonConformLoad.LoadGroup" from matching arbitrary characters.
reg_string = "|".join(re.escape(key) for key in mapp)
pattern = re.compile(reg_string)
# Binding the method once avoids an attribute lookup per call in the loop.
search = pattern.search
queue = Queue.Queue()
result = []
# Start the worker thread(s); only one is used here
for i in range(1):
    w = Worker()
    w.setDaemon(1)
    w.start()

chunks = get_chunks(FILE_R, 10 * 1024 * 1024)
for chunk in chunks:
    print chunk
    queue.put((FILE_R, chunk))
queue.join()

with open(FILE_WR, "w") as f:
    for chunk_lines in result:
        for line in chunk_lines:
            f.write("%s\n" % line)


print time.time() - start_time

So I think the problem is that the jobs taken from the queue do not finish in order, and the results end up out of sync. Is there any way I can synchronize them somehow? Thanks for your help!
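One way to get the output back in order is to give every chunk an index when it is put on the queue, store each worker's lines under that index, and have the writer walk the indices in ascending order. Below is a minimal sketch of that idea, not a drop-in fix: it assumes the `process` and `get_chunks` functions and the `FILE_R`/`FILE_WR` paths from the question are unchanged, and `OrderedWorker`, `results` and `results_lock` are illustrative names.

import threading, Queue

queue = Queue.Queue()
results = {}                            # chunk index -> list of processed lines
results_lock = threading.Lock()

class OrderedWorker(threading.Thread):
    def run(self):
        while True:
            index, file_r, chunk = queue.get()
            lines = process(file_r, chunk)      # same helper as in the question
            with results_lock:
                results[index] = lines
            queue.task_done()

for _ in range(4):                      # several workers are now safe to run
    w = OrderedWorker()
    w.setDaemon(True)
    w.start()

for index, chunk in enumerate(get_chunks(FILE_R, 10 * 1024 * 1024)):
    queue.put((index, FILE_R, chunk))
queue.join()

# Write the chunks back in their original order, not in completion order.
with open(FILE_WR, "w") as f:
    for index in sorted(results):
        for line in results[index]:
            f.write("%s\n" % line)

Since the per-chunk work is mostly pure-Python string handling, the GIL limits how much the threads can actually overlap; if both ordering and speed matter, `multiprocessing.Pool.imap` is worth trying, as it also returns results in submission order.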

python multithreading large-files