My multiprocessing pool (8 cores, 16 GB RAM) uses up all of my memory before ingesting much data. I am working with a 6 GB dataset.
I have tried the various map flavors, including imap, imap_unordered, apply, map, etc. I have also tried maxtasksperchild, which seems to increase memory usage.
import string
import re
import multiprocessing as mp
from tqdm import tqdm

linkregex = re.compile(r"http\S+")
puncregex = re.compile(r"(?<=\w)[^\s\w](?![^\s\w])")
emojiregex = re.compile(r"(\u00a9|\u00ae|[\u2000-\u3300]|\ud83c[\ud000-\udfff]|\ud83d[\ud000-\udfff]|\ud83e[\ud000-\udfff])")

sentences = []

def process(item):
    return re.sub(emojiregex, r" \1 ", re.sub(puncregex, "", re.sub(linkregex, "link", item))).lower().split()

if __name__ == '__main__':
    with mp.Pool(8) as pool:
        sentences = list(tqdm(pool.imap_unordered(process, open('scrape/output.txt')), total=52123146))
    print(str(len(sentences)))
    with open("final/word2vectweets.txt", "a+") as out:
        out.writelines(f"{s}\n" for s in sentences)  # write() cannot take a list
This should produce a list of the processed lines from the file, but it eats memory far too quickly. An earlier version without multiprocessing and with simpler processing worked fine.
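The arithmetic alone suggests why collecting everything with `list()` fails here: `imap_unordered` yields lazily, but the `list()` call materializes all 52 million token lists at once. A rough back-of-envelope estimate (the sample tweet length is illustrative, not measured on the real data):

```python
import sys

# One processed item is a list of short token strings, like process() returns.
# This sample line is an assumption standing in for a typical tweet.
tokens = "this is a fairly typical processed tweet line".split()
per_item = sys.getsizeof(tokens) + sum(sys.getsizeof(t) for t in tokens)

n_lines = 52_123_146  # the total passed to tqdm above
total_gb = per_item * n_lines / 1024**3
print(f"~{per_item} bytes per item -> ~{total_gb:.0f} GB just to hold the lists")
```

Even with optimistic per-item sizes, the full result set is several times larger than 16 GB of RAM, so streaming the results to disk as they arrive is the only workable shape.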
How does this look?
import re
import multiprocessing as mp

linkregex = re.compile(r"http\S+")
puncregex = re.compile(r"(?<=\w)[^\s\w](?![^\s\w])")
emojiregex = re.compile(r"(\u00a9|\u00ae|[\u2000-\u3300]|\ud83c[\ud000-\udfff]|\ud83d[\ud000-\udfff]|\ud83e[\ud000-\udfff])")

def process(item):
    return re.sub(emojiregex, r" \1 ", re.sub(puncregex, "", re.sub(linkregex, "link", item))).lower().split()

if __name__ == '__main__':
    in_file_path = 'scrape/output.txt'
    out_file_path = 'final/word2vectweets.txt'
    with mp.Pool() as pool, open(in_file_path, 'r') as file_in, open(out_file_path, 'a') as file_out:
        # stream each result straight to disk instead of collecting them in a list
        for curr_sentence in pool.imap_unordered(process, file_in, chunksize=1000):
            file_out.write(f'{curr_sentence}\n')
I tested a range of chunk sizes and 1000 seems to be the sweet spot. I will keep investigating.
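For anyone tuning this themselves, chunksize can be compared empirically with a toy benchmark along these lines. `work` here is just a cheap stand-in for the regex pipeline, and the timings depend entirely on the machine and the cost of each task:

```python
import multiprocessing as mp
import time

def work(x):
    # stand-in for the per-line regex processing
    return x * 2

if __name__ == '__main__':
    data = range(50_000)
    with mp.Pool() as pool:
        for cs in (1, 100, 1000):
            start = time.perf_counter()
            for _ in pool.imap_unordered(work, data, chunksize=cs):
                pass
            print(f"chunksize={cs}: {time.perf_counter() - start:.3f}s")
```

With tiny per-item work, small chunk sizes spend most of their time on inter-process communication, which is why chunksize=1 is usually the slowest by a wide margin; larger chunks amortize that overhead but delay the first results.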