I ran the same dataset through dask in two different ways, and I found that one way is almost ten times slower than the other!!! I'm trying to figure out why.
import dask.dataframe as dd
from multiprocessing import cpu_count
#Count the number of cores
cores = cpu_count()
#Read and partition the dataframes by the number of cores
english = dd.read_csv('/home/alberto/Escritorio/pycharm/NLP/ignore_files/es-en/europarl-v7.es-en.en',
sep='\r', header=None, names=['ingles'], dtype={'ingles':str})
english = english.repartition(npartitions=cores)
spanish = dd.read_csv('/home/alberto/Escritorio/pycharm/NLP/ignore_files/es-en/europarl-v7.es-en.es',
sep='\r', header=None, names=['espanol'], dtype={'espanol':str})
spanish = spanish.repartition(npartitions=cores)
#compute
%time total_dd = dd.merge(english, spanish, left_index=True, right_index=True).compute()
Out: 9.77 s
import pandas as pd
import dask.dataframe as dd
from multiprocessing import cpu_count
#Count the number of cores
cores = cpu_count()
#Read the DataFrames with pandas and partition by the number of cores
pd_english = pd.read_csv('/home/alberto/Escritorio/pycharm/NLP/ignore_files/es-en/europarl-v7.es-en.en',
sep='\r', header=None, names=['ingles'])
pd_spanish = pd.read_csv('/home/alberto/Escritorio/pycharm/NLP/ignore_files/es-en/europarl-v7.es-en.es',
sep='\r', header=None, names=['espanol'])
english_pd = dd.from_pandas(pd_english, npartitions=cores)
spanish_pd = dd.from_pandas(pd_spanish, npartitions=cores)
#compute
%time total_pd = dd.merge(english_pd, spanish_pd, left_index=True, right_index=True).compute()
Out: 1.31 s
Does anyone know why? Is there another way to do this even faster?
Obviously the source dataframes are large, and reading them takes a considerable amount of time that the second variant simply does not count: `dd.read_csv` is lazy, so in the first snippet the files are actually read inside the timed `.compute()` call, while in the second snippet `pd.read_csv` reads the files eagerly, before the timed cell even starts. Try a fairer test: write a function that reads both DataFrames and merges them, and time the whole function.