Parallelizing with Dask is slower than the sequential code.
I have a nested for loop that I am trying to parallelize on a local cluster, but I cannot find the right way to do it. I want to parallelize the inner loop.
I have two large NumPy matrices that I iterate over, performing mathematical calculations on subsets of them. The dimensions:

    data_mat.shape = (38, 243863)
    indicies_mat.shape = (243863, 27)
    idxX.shape = (19,)
    idxY.shape = (19,)
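For context on the indexing used below: `np.where(labels == 1)` returns a tuple of index arrays, so `list(np.where(...))` is a one-element list containing an index array, and combining it with a column of column-indices selects a 2-D subset via broadcasting. A minimal sketch with made-up toy data (the labels, matrix, and column choices here are assumptions for illustration, not the question's real data):

```python
import numpy as np

labels = np.array([1, 2, 1, 2, 1, 2])      # toy labels
data_mat = np.arange(24).reshape(6, 4)     # toy stand-in for the (38, 243863) matrix

# np.where returns a tuple; wrapping in list() gives [array([0, 2, 4])],
# which numpy treats as a row-index array of shape (1, 3)
idxX = list(np.where(labels == 1))

cols = [[0], [2]]                          # two column indices, shape (2, 1)

# Broadcasting (1, 3) row indices against (2, 1) column indices
# selects a (2, 3) block; .T yields one row per class-1 sample
dataX = data_mat[idxX, cols].T             # shape (3, 2)
```

So each inner-loop iteration in the question extracts a (19, 27)-shaped slice of `data_mat` for each class.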
Sequential code:

    import datetime
    import numpy as np

    start = datetime.datetime.now()
    for i in range(num + 1):
        if i == 0:
            labels = np.array(true_labels)
        else:
            labels = label_mat[i]
        idxX = list(np.where(labels == 1))
        idxY = list(np.where(labels == 2))
        ansColumn = []
        for j in range(indices.shape[0]):
            list_of_indices = [[k] for k in indices[j, :]]
            dataX = (data_mat[idxX, list_of_indices]).T
            dataY = (data_mat[idxY, list_of_indices]).T
            ansColumn.append(calc_func(dataX, dataY))
        if i == 0:
            ansMat = ansColumn
        else:
            ansMat = np.c_[ansMat, ansColumn]
    end = datetime.datetime.now()
    print(end - start)
Parallel code:

    from dask.distributed import Client, LocalCluster, as_completed

    start = datetime.datetime.now()
    cluster = LocalCluster(n_workers=4, processes=False)
    client = Client(cluster)
    for i in range(num + 1):
        if i == 0:
            labels = np.array(true_labels)
        else:
            labels = label_mat[i]
        idxX = list(np.where(labels == 1))
        idxY = list(np.where(labels == 2))
        [big_future] = client.scatter([data_mat], broadcast=True)
        [idx_b] = client.scatter([idxX], broadcast=True)
        [idy_b] = client.scatter([idxY], broadcast=True)
        futures = [client.submit(prep_calc_func, idx_b, idy_b, indices[j, :], big_future)
                   for j in range(indices.shape[0])]
        ansColumn = []
        for fut in as_completed(futures):
            ansColumn.append(fut.result())
        if i == 0:
            ansMat = ansColumn
        else:
            ansMat = np.c_[ansMat, ansColumn]
    end = datetime.datetime.now()
    print(end - start)
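A common restructuring for this pattern is to scatter `data_mat` once, before the loop, and to make each task big enough to be worth its overhead — e.g. one task per outer iteration (a whole answer column) instead of one task per row of `indices`. A minimal runnable sketch under those assumptions, with toy data and a dummy `calc_func` standing in for the question's real one:

```python
import numpy as np
from dask.distributed import Client, LocalCluster

# Dummy stand-in for the question's calc_func (an assumption, for runnability)
def calc_func(dataX, dataY):
    return float(dataX.mean() - dataY.mean())

# One task computes a whole answer column, so the per-task Dask overhead
# is paid once per outer iteration instead of once per row of `indices`
def calc_column(idxX, idxY, indices, data_mat):
    column = []
    for j in range(indices.shape[0]):
        cols = [[k] for k in indices[j, :]]
        column.append(calc_func(data_mat[idxX, cols].T, data_mat[idxY, cols].T))
    return column

cluster = LocalCluster(n_workers=2, processes=False)
client = Client(cluster)

rng = np.random.default_rng(0)
data_mat = rng.random((10, 50))           # toy stand-in for the (38, 243863) matrix
indices = rng.integers(0, 50, (20, 3))    # toy stand-in for the (243863, 27) matrix
label_mat = rng.integers(1, 3, (3, 10))   # toy labels, values 1 or 2

# Scatter the big array ONCE, outside the loop
big_future = client.scatter(data_mat, broadcast=True)

futures = []
for i in range(label_mat.shape[0]):
    labels = label_mat[i]
    idxX = list(np.where(labels == 1))
    idxY = list(np.where(labels == 2))
    futures.append(client.submit(calc_column, idxX, idxY, indices, big_future))

# gather preserves submission order, so columns line up deterministically
ansMat = np.array(client.gather(futures)).T
client.close()
cluster.close()
```

Note that `as_completed` in the question's code appends results in completion order, which is nondeterministic; gathering in submission order, as above, keeps the output columns aligned with the sequential version.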
Helper function:

    def prep_calc_func(idxX, idxY, subset_of_indices, data_mat):
        list_of_indices = [[k] for k in subset_of_indices]
        dataX = (data_mat[idxX, list_of_indices]).T
        dataY = (data_mat[idxY, list_of_indices]).T
        return calc_func(dataX, dataY)
Local machine: MacBook Pro (Retina, 13-inch, Mid 2014), 2.6 GHz Intel Core i5 (hw.physicalcpu: 2, hw.logicalcpu: 4), 8 GB 1600 MHz DDR3 memory.
When I run the sequential code, it takes 01:52 minutes to complete (under 2 minutes), but the parallel code takes more than 15 minutes, no matter which approach I use (compute, result with client.submit, or dask.delayed).
(I would prefer to use the dask.distributed package, because the next stage may also use a remote cluster.)
Any idea what I am doing wrong?
There are many reasons things could be slow. There could be a lot of communication. Your tasks might be too small (recall that Dask's overhead is around 1 ms per task), or it could be something else entirely. To learn more about understanding Dask performance, I recommend the following docs:
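The "tasks too small" point applies directly here: the parallel loop submits one task per row of `indices`. A quick back-of-envelope check, using the shapes and timing from the question and the ~1 ms rule of thumb:

```python
# One client.submit per row of `indices` in each outer iteration
tasks_per_outer_iteration = 243863   # indices.shape[0] from the question
overhead_per_task_s = 1e-3           # ~1 ms Dask overhead per task (rule of thumb)

scheduling_overhead_s = tasks_per_outer_iteration * overhead_per_task_s
sequential_runtime_s = 112           # 01:52 reported for the entire sequential run

# ~244 s of pure scheduling cost per outer iteration -- already more than
# the whole sequential run, before any actual computation happens
print(scheduling_overhead_s, sequential_runtime_s)
```

With tasks this small, the scheduler overhead alone dominates, so fewer, larger tasks (batching many rows per task) is the usual remedy.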