What is a good way to get a fast FIFO queue?


I am sampling from an external device in Python and storing the values in a FIFO queue. I have a fixed-size array: I enqueue a new sample at one end and dequeue the "oldest" value from the other end (my terminology comes from here: https://stackabuse.com/stacks-and-queues-in-python/). I have tried different implementations of this, and the performance of each one depends strongly on the size of the FIFO array, see the examples below. Is there a faster method than the FIFO queues I have gathered here? Also, apart from the speed I can measure for a queue of a given size, are there other issues with these approaches I should be aware of?

import numpy as np
import time
import numba

@numba.njit
def fifo(sig_arr, n):
    # n enqueue/dequeue cycles: shift every element one slot towards the
    # front and write the new sample at the end (still O(m) per enqueue)
    for i in range(n):
        sig_arr[:-1] = sig_arr[1:]
        sig_arr[-1] = i
    return

n = 1000000 # number of enqueues/dequeues
for m in [100, 1000, 10000]: # fifo queue length
    print("FIFO array length is:" + str(m))
    print("Numpy-based queue")
    sig_arr_np = np.zeros(m)
    for _ in range(5):
        tic = time.time()
        for i in range(n):
            sig_arr_np[:-1] = sig_arr_np[1:]
            sig_arr_np[-1] = i
        print(time.time() - tic)

    print("Jitted numpy-based queue")
    sig_arr_jit = np.zeros(m)
    for _ in range(5):
        tic = time.time()
        fifo(sig_arr_jit, n)
        print(time.time()-tic)

    print("list-based queue")
    sig_arr_list = [0]*m
    for _ in range(5):
        tic = time.time()
        for i in range(n):
            sig_arr_list.append(i)
            sig_arr_list.pop(0)
        print(time.time() - tic)
print("done...")

Output:

FIFO array length is:100
Numpy-based queue
0.7159860134124756
0.7160656452178955
0.7072808742523193
0.6405529975891113
0.6402220726013184
Jitted numpy-based queue
0.34624767303466797
0.10235905647277832
0.09779787063598633
0.10352706909179688
0.1059865951538086
list-based queue
0.19921231269836426
0.18682050704956055
0.178941011428833
0.190687894821167
0.18914198875427246
FIFO array length is:1000
Numpy-based queue
0.7035880088806152
0.7174069881439209
0.7061927318572998
0.7100749015808105
0.7161743640899658
Jitted numpy-based queue
0.4495429992675781
0.4449293613433838
0.4404451847076416
0.4400477409362793
0.43927478790283203
list-based queue
0.2652933597564697
0.26186203956604004
0.2784764766693115
0.27001261711120605
0.2699151039123535
FIFO array length is:10000
Numpy-based queue
2.0453989505767822
1.9288575649261475
1.9308562278747559
1.9575252532958984
2.048408269882202
Jitted numpy-based queue
5.075503349304199
5.083268404006958
5.181215286254883
5.115811109542847
5.163492918014526
list-based queue
1.2474076747894287
1.2347135543823242
1.2435767650604248
1.2809157371520996
1.237732172012329
done...

EDIT: Here I include the solution suggested by Jeff H., with the deque set to a fixed size so the .pop() call is no longer needed, which makes it a bit faster.

from collections import deque

n = 1000000 # number of enqueues/dequeues
for m in [100, 1000, 10000]: # fifo queue length
    print("deque-list-based queue")
    d = deque([None], m)  # maxlen=m: once full, each append discards the oldest item
    for _ in range(3):
        tic = time.time()
        for i in range(n):
            d.append(i)
        print(time.time() - tic)
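
Another fixed-size approach that avoids any shifting is a preallocated circular buffer, where only a write index moves. Below is a rough sketch of that idea (the RingFIFO class and its method names are illustrative; it is not one of the implementations timed above):

import numpy as np

class RingFIFO:
    """Fixed-size FIFO over a preallocated numpy array (illustrative sketch)."""

    def __init__(self, size):
        self.buf = np.zeros(size)
        self.head = 0  # index of the oldest sample / next slot to overwrite

    def push(self, value):
        # Overwrite the oldest sample in place: O(1), no element shifting.
        self.buf[self.head] = value
        self.head = (self.head + 1) % len(self.buf)

    def ordered(self):
        # Return a copy of the samples from oldest to newest (O(m)).
        return np.concatenate((self.buf[self.head:], self.buf[:self.head]))

Each push is a single write plus an index update, so the cost of enqueueing no longer depends on the queue length m; the time ordering is only reconstructed when ordered() is called.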
1 Answer

Why not try the natural choice, collections.deque?

All of your implementations above suffer from the same poor performance because every enqueue/dequeue is an O(N) operation: they are backed by flat lists/arrays, so removing the oldest element means shifting everything else. A proper data structure for a FIFO does these operations in constant O(1) time.

Consider:


import time
from collections import deque

n = 1000000 # number of enqueues/dequeues
for m in [100, 1000, 10000, 1_000_000]: # fifo queue length
    print(f'\nqueue length: {m}')
    print('deque')
    d = deque(range(m))
    for _ in range(5):
        tic = time.time()
        for i in range(n):
            d.append(i)
            d.popleft()  # remove the oldest item (FIFO); d.pop() would remove the newest
        print(time.time() - tic)
print("done...")

Yields (note that the time stays nearly constant even for large m, and beats all of the above at every size):

queue length: 100
deque
0.13888287544250488
0.13873004913330078
0.13820695877075195
0.1369168758392334
0.1436598300933838

queue length: 1000
deque
0.1434800624847412
0.13672494888305664
0.1380469799041748
0.14961719512939453
0.13932228088378906

queue length: 10000
deque
0.14437294006347656
0.14214491844177246
0.13336801528930664
0.14667487144470215
0.1375408172607422

queue length: 1000000
deque
0.13426589965820312
0.13596534729003906
0.13602590560913086
0.13472890853881836
0.134993314743042
done...
[Finished in 3.4s]
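
If the buffered samples eventually need to be processed as an array (an assumption about the sampling use case; the post does not say), a maxlen deque converts directly, so the fast append path and the numpy processing path can coexist:

import numpy as np
from collections import deque

m = 100
d = deque(maxlen=m)           # appending past maxlen discards the oldest item
for i in range(1000):
    d.append(i)

window = np.array(d)          # the most recent m samples, oldest first
print(window[0], window[-1])  # 900 999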