Improve min/max downsampling

Question (7 votes, 4 answers)

I have some large arrays (~100 million points) that I need to plot interactively. I am currently using Matplotlib. Plotting the arrays as-is gets very slow and is wasteful since you can't visualize that many points anyway.

So I made a min/max decimation function that I tied to the axis' "xlim_changed" callback. I went with a min/max approach because the data contains fast spikes that I don't want to miss by simply stepping through the data. There are more wrappers that crop to the x-limits and skip processing under certain conditions, but the relevant part is below:

def min_max_downsample(x,y,num_bins):
    """ Break the data into num_bins and returns min/max for each bin"""
    pts_per_bin = x.size // num_bins    

    #Create temp to hold the reshaped & slightly cropped y
    y_temp = y[:num_bins*pts_per_bin].reshape((num_bins, pts_per_bin))
    y_out      = np.empty((num_bins,2))
    #Take the min/max by rows.
    y_out[:,0] = y_temp.max(axis=1)
    y_out[:,1] = y_temp.min(axis=1)
    y_out = y_out.ravel()

    #This duplicates the x-value for each min/max y-pair
    x_out = np.empty((num_bins,2))
    x_out[:] = x[:num_bins*pts_per_bin:pts_per_bin,np.newaxis]
    x_out = x_out.ravel()
    return x_out, y_out

This works quite well and is fast enough (~80 ms for 1e8 points and 2k bins). There is very little lag as it periodically recomputes and updates the line's x and y data.
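For context, wiring a downsampler like this into the "xlim_changed" callback mentioned above could look roughly like the sketch below. This is not the asker's actual wrapper; the cropping margins and the redraw call are assumptions.

def attach_downsampler(ax, line, x, y, num_bins=2000):
    """Re-downsample and refresh `line` whenever the x-limits change (sketch)."""
    def on_xlim_changed(axes):
        xmin, xmax = axes.get_xlim()
        # Crop to (slightly more than) the visible range before downsampling.
        i0, i1 = np.searchsorted(x, [xmin, xmax])
        i0, i1 = max(i0 - 1, 0), min(i1 + 1, x.size)
        if i1 - i0 > 2 * num_bins:
            xs, ys = min_max_downsample(x[i0:i1], y[i0:i1], num_bins)
        else:
            # Few enough points in view: plot them as-is.
            xs, ys = x[i0:i1], y[i0:i1]
        line.set_data(xs, ys)
        axes.figure.canvas.draw_idle()

    ax.callbacks.connect('xlim_changed', on_xlim_changed)

With a line created up front via line, = ax.plot(*min_max_downsample(x, y, 2000)), panning and zooming then only trigger the cheap recomputation.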

However, my only complaint is with the x data. This code duplicates the x value of each bin's left edge and doesn't return the true x location of the min/max y pairs. I typically set the number of bins to about double the axis' pixel width, so you can't really see the difference because the bins are so small... but I know it's there... and it bugs me.

So here is attempt number 2, which does return the actual x values for each min/max pair. However, it is about 5 times slower.

def min_max_downsample_v2(x,y,num_bins):
    pts_per_bin = x.size // num_bins
    #Create temp to hold the reshaped & slightly cropped y
    y_temp = y[:num_bins*pts_per_bin].reshape((num_bins, pts_per_bin))
    #use argmax/min to get column locations
    cc_max = y_temp.argmax(axis=1)
    cc_min = y_temp.argmin(axis=1)    
    rr = np.arange(0,num_bins)
    #compute the flat index to where these are
    flat_max = cc_max + rr*pts_per_bin
    flat_min = cc_min + rr*pts_per_bin
    #Create a boolean mask of these locations
    mm_mask  = np.full((x.size,), False)
    mm_mask[flat_max] = True
    mm_mask[flat_min] = True  
    x_out = x[mm_mask]    
    y_out = y[mm_mask]  
    return x_out, y_out

This takes roughly 400+ ms on my machine, which becomes quite noticeable. So my question is basically: is there a way to go faster while providing the same results? The bottleneck is mostly in the numpy.argmin and numpy.argmax functions, which are a good bit slower than numpy.min and numpy.max.
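To see that gap in isolation (outside the plotting code), a quick check along these lines can be used; the array size here is smaller than the 1e8-point example and the exact numbers will vary by machine:

import timeit
import numpy as np

y_test = np.random.randn(2000, 5000)   # roughly num_bins x pts_per_bin

# Plain reductions only have to track the extreme value...
print(timeit.timeit(lambda: y_test.max(axis=1), number=20))
# ...while arg-reductions also have to track where it occurred, which costs more.
print(timeit.timeit(lambda: y_test.argmax(axis=1), number=20))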

The answer may be to just live with version #1, since it doesn't really matter visually. Or maybe try to speed it up with something like Cython (which I have never used).

FYI, I'm using Python 3.6.4 on Windows... Example usage would be something like this:

x_big = np.linspace(0,10,100000000)
y_big = np.cos(x_big )
x_small, y_small = min_max_downsample(x_big ,y_big ,2000) #Fast but not exactly correct.
x_small, y_small = min_max_downsample_v2(x_big ,y_big ,2000) #correct but not exactly fast.
Tags: python, python-3.x, numpy, numba
4 Answers
3 votes

I managed to get improved performance by using the output of arg(min|max) directly to index the data arrays. This comes at the cost of an extra call to np.sort, but the axis to be sorted has only two elements (the min/max indices) and the overall array is rather small (number of bins):

def min_max_downsample_v3(x, y, num_bins):
    pts_per_bin = x.size // num_bins

    x_view = x[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    y_view = y[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    i_min = np.argmin(y_view, axis=1)
    i_max = np.argmax(y_view, axis=1)

    r_index = np.repeat(np.arange(num_bins), 2)
    c_index = np.sort(np.stack((i_min, i_max), axis=1)).ravel()

    return x_view[r_index, c_index], y_view[r_index, c_index]

I checked the timings with your example and got:

  • min_max_downsample_v1: 110 ms ± 5 ms
  • min_max_downsample_v2: 240 ms ± 8.01 ms
  • min_max_downsample_v3: 164 ms ± 1.23 ms

I also checked returning directly after the calls to arg(min|max): the result was likewise 164 ms, i.e. there is no real overhead after those calls anymore.
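For reference, the "return directly after arg(min|max)" variant mentioned above would essentially be the first half of v3, something like this sketch of what was presumably timed:

def min_max_args_only(x, y, num_bins):
    # Stop right after argmin/argmax: timing this at ~164 ms as well shows that
    # virtually all of v3's runtime is spent in these two calls.
    pts_per_bin = x.size // num_bins
    y_view = y[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    return np.argmin(y_view, axis=1), np.argmax(y_view, axis=1)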


2 votes

So this doesn't address speeding up the specific function in question, but it does show a few ways to plot a line with a large number of points somewhat effectively. It assumes that the x points are ordered and uniformly (or close to uniformly) sampled.

Setup

from pylab import *
import matplotlib.pyplot as plt  # plt.Line2D is used in plot_bigly below

Here's a function I like that reduces the number of points by randomly choosing one in each interval. It isn't guaranteed to show every peak in the data, but it doesn't have as many problems as directly decimating the data, and it is fast.

def calc_rand(y, factor):
    split = y[:len(y)//factor*factor].reshape(-1, factor)
    idx = randint(0, split.shape[-1], split.shape[0])
    return split[arange(split.shape[0]), idx]

And here's the min-and-max version, to see the signal envelope:

def calc_env(y, factor):
    """
    y : 1D signal
    factor : amount to reduce y by (actually returns twice this for min and max)
    Calculate envelope (interleaved min and max points) for y
    """
    split = y[:len(y)//factor*factor].reshape(-1, factor)
    upper = split.max(axis=-1)
    lower = split.min(axis=-1)
    return c_[upper, lower].flatten()

The following function can take either of those and uses them to reduce the data being drawn. The number of points actually shown is 5000 by default, which should far exceed a monitor's resolution. Data is cached after it has been reduced. Memory may be an issue, especially with large amounts of data, but it shouldn't exceed the amount required by the original signal.

def plot_bigly(x, y, *, ax=None, M=5000, red=calc_env, **kwargs):
    """
    x : the x data
    y : the y data
    ax : axis to plot on
    M : The maximum number of line points to display at any given time
    kwargs : passed to line
    """
    assert x.shape == y.shape, "x and y data must have same shape!"
    if ax is None:
        ax = gca()

    cached = {}

    # Setup line to be drawn beforehand, note this doesn't increment line properties so
    #  style needs to be passed in explicitly
    line = plt.Line2D([],[], **kwargs)
    def update(xmin, xmax):
        """
        Update line data

        precomputes and caches entire line at each level, so initial
        display may be slow but panning and zooming should speed up after that
        """
        # Find nearest power of two as a factor to downsample by
        imin = max(np.searchsorted(x, xmin)-1, 0)
        imax = min(np.searchsorted(x, xmax) + 1, y.shape[0])
        L = imax - imin + 1
        factor = max(2**int(round(np.log(L/M) / np.log(2))), 1)

        # only calculate reduction if it hasn't been cached, do reduction using nearest cached version if possible
        if factor not in cached:
            cached[factor] = red(y, factor=factor)

        ## Make sure lengths match correctly here, by ensuring at least
        #   "factor" points for each x point, then matching y length
        #  this assumes x has uniform sample spacing - but could be modified
        newx = x[imin:imin + ((imax-imin)//factor)* factor:factor]
        start = imin//factor
        newy = cached[factor][start:start + newx.shape[-1]]
        assert newx.shape == newy.shape, "decimation error {}/{}!".format(newx.shape, newy.shape)

        ## Update line data
        line.set_xdata(newx)
        line.set_ydata(newy)

    update(x[0], x[-1])
    ax.add_line(line)
    ## Manually update limits of axis, as adding line doesn't do this
    #   if drawing multiple lines this can quickly slow things down, and some
    #   sort of check should be included to prevent unnecessary changes in limits
    #   when a line is first drawn.
    ax.set_xlim(min(ax.get_xlim()[0], x[0]), max(ax.get_xlim()[1], x[-1]))
    ax.set_ylim(min(ax.get_ylim()[0], np.min(y)), max(ax.get_ylim()[1], np.max(y)))

    def callback(*ignore):
        lims = ax.get_xlim()
        update(*lims)

    ax.callbacks.connect('xlim_changed', callback)

    return [line]

Here's some test code:

L=int(100e6)
x=linspace(0,1,L)
y=0.1*randn(L)+sin(2*pi*18*x)
plot_bigly(x,y, red=calc_env)

On my machine this displays very quickly. Zooming has a bit of lag, especially when zooming in a long way. Panning has no problems. Using the random selection instead of min and max is a fair bit faster, and only has problems at very high zoom levels.
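Using the random-pick reducer instead is just a matter of passing it in, e.g. with the same test arrays as above:

plot_bigly(x, y, red=calc_rand)
show()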


2 votes

EDIT: Added parallel=True to the numba jit ... even faster

I ended up making a hybrid of a single-pass argmin+argmax routine with the improved indexing from @a_guest's answer and the link to this related simultaneous min max question.

This version returns the correct x value for each min/max y pair and, thanks to numba, is actually a bit faster than the "fast but not exactly correct" version.

from numba import jit, prange
@jit(parallel=True)
def min_max_downsample_v4(x, y, num_bins):
    pts_per_bin = x.size // num_bins
    x_view = x[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    y_view = y[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)    
    i_min = np.zeros(num_bins,dtype='int64')
    i_max = np.zeros(num_bins,dtype='int64')

    for r in prange(num_bins):
        min_val = y_view[r,0]
        max_val = y_view[r,0]
        for c in range(pts_per_bin):
            if y_view[r,c] < min_val:
                min_val = y_view[r,c]
                i_min[r] = c
            elif y_view[r,c] > max_val:
                max_val = y_view[r,c]
                i_max[r] = c                
    r_index = np.repeat(np.arange(num_bins), 2)
    c_index = np.sort(np.stack((i_min, i_max), axis=1)).ravel()        
    return x_view[r_index, c_index], y_view[r_index, c_index]

Comparing speeds with timeit shows the numba code is roughly 2.6x faster than v1 while providing better results. It is also a bit more than 10x faster than doing numpy's argmin & argmax in series.

%timeit min_max_downsample_v1(x_big ,y_big ,2000)
96 ms ± 2.46 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

%timeit min_max_downsample_v2(x_big ,y_big ,2000)
507 ms ± 4.75 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit min_max_downsample_v3(x_big ,y_big ,2000)
365 ms ± 1.27 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit min_max_downsample_v4(x_big ,y_big ,2000)
36.2 ms ± 487 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
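Since v2 and v4 are both supposed to pick the true sample locations, a quick sanity check (not part of the original answer, and assuming the min and max never fall on the same sample within a bin) is to compare their outputs directly:

x2, y2 = min_max_downsample_v2(x_big, y_big, 2000)
x4, y4 = min_max_downsample_v4(x_big, y_big, 2000)
print(np.allclose(x2, x4), np.allclose(y2, y4))   # expected: True True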

0 votes

Have you tried pyqtgraph for interactive plotting? It is more responsive than matplotlib.

One trick I use for downsampling is to use array_split and compute the min and max of the splits. The split is done according to the number of samples per pixel of the plot area.
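The answer doesn't spell that trick out, but a minimal numpy-only sketch of it might look like the following, where samples_per_pixel and the interleaving layout are assumptions on my part:

import numpy as np

def minmax_by_split(x, y, samples_per_pixel):
    """Split into ~samples_per_pixel chunks and keep each chunk's min and max (sketch)."""
    n_chunks = max(y.size // samples_per_pixel, 1)
    x_chunks = np.array_split(x, n_chunks)
    y_chunks = np.array_split(y, n_chunks)
    x_out = np.empty(2 * n_chunks)
    y_out = np.empty(2 * n_chunks)
    for i, (xc, yc) in enumerate(zip(x_chunks, y_chunks)):
        lo, hi = yc.argmin(), yc.argmax()
        first, second = sorted((lo, hi))      # keep the two extremes in time order
        x_out[2*i], x_out[2*i + 1] = xc[first], xc[second]
        y_out[2*i], y_out[2*i + 1] = yc[first], yc[second]
    return x_out, y_out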
