Helix convolution in PyTorch (machine learning)


I am currently working on the development of convolutional neural networks that involve arrays of up to 5 or 6 dimensions.

I am aware that many of the tools used for convolutional neural networks do not really handle N-D convolutions, so I decided to try writing an implementation of helix convolution, whereby the convolution can be treated as one large 1-D convolution (see Ref. 1, http://sepwww.stanford.edu/public/docs/sep95/jon1/paper_html/node2.html, and Ref. 2, https://sites.ualberta.ca/~mostafan/Files/Papers/md_convolution_TLE2009.pdf, for more details on the concept).

I did this under the (possibly mistaken) assumption that a single large 1-D convolution might be easier to run on a GPU than a multidimensional one, and that the method scales trivially to N dimensions.
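To make the trick concrete in 2-D: if the signal and kernel are both zero-padded to the full-convolution shape and unwound row by row, a single 1-D convolution of the two strips reproduces the 2-D convolution exactly. A toy sketch (the shapes here are illustrative, not my real data):

import numpy as np
from scipy import signal

# Pad signal and kernel to the full-convolution shape, unwind to 1-D,
# convolve once, and re-wind; the column padding prevents wrap-around.
x = np.random.rand(4, 5)
h = np.random.rand(3, 3)
H, W = x.shape[0] + h.shape[0] - 1, x.shape[1] + h.shape[1] - 1

xp = np.zeros((H, W)); xp[:x.shape[0], :x.shape[1]] = x
hp = np.zeros((H, W)); hp[:h.shape[0], :h.shape[1]] = h

strip = np.convolve(xp.ravel(), hp.ravel())[:H * W].reshape(H, W)
assert np.allclose(strip, signal.convolve2d(x, h))  # matches 2-D full convolution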

In particular, a quote from Reference 2:

We have not found a computational advantage of N-D standard convolution versus convolution using the algorithm described in this article. We have found, however, that writing code for seismic data regularization using this trick leads to algorithms that can easily handle regularization problems in any number of spatial dimensions (Naghizadeh and Sacchi, 2009).

I have written an implementation of the function below and compared it against signal.fftconvolve. It runs slower than that function on the CPU, but I would still like to see how it performs as a forward convolution layer on the GPU in PyTorch.

Can someone help me port this code to PyTorch so that I can verify its behaviour?

"""
HELIX CONVOLUTION FUNCTION

Shrink:
CROPS THE CONVOLVED SIGNAL DOWN TO THE ORIGINAL SIZE OF THE INPUT SIGNAL.

Pad:
PADS THE DIFFERENCE BETWEEN THE ORIGINAL SHAPE AND THE DESIRED, CONVOLVED SHAPE FOR KERNEL AND SIGNAL.

GetLength:
EXTRACTS THE LENGTH OF THE UNWOUND STRIP OF THE SIGNAL AND KERNEL THAT IS TO BE CONVOLVED.

FFTConvolve:
USES THE NUMPY FFT PACKAGE TO PERFORM FAST FOURIER CONVOLUTION ON THE SIGNALS 

Convolve:
USES HELIX CONVOLUTION ON AN INPUT ARRAY AND KERNEL. 

"""

import numpy as np
from scipy import signal
import operator
import time


class HelixCPU:
    @classmethod
    def Shrink(cls, array, bounding):
        # Centre-crop the convolved array back down to the shape `bounding`.
        start = tuple(map(lambda a, da: (a - da) // 2, array.shape, bounding))
        end = tuple(map(operator.add, start, bounding))
        slices = tuple(map(slice, start, end))
        return array[slices]

    @classmethod
    def Pad(cls, array, target_shape):
        # Zero-pad the array at the end of each axis up to `target_shape`.
        diff = target_shape - array.shape
        padder = [(0, val) for val in diff]
        padded = np.pad(array, padder, 'constant')
        return padded

    @classmethod
    def GetLength(cls, array_shape, padded_shape):
        # Row-major strides of the padded array, built from the innermost
        # dimension outwards, give the step sizes along the helix.
        temp = 1
        steps = np.zeros_like(array_shape)

        for i, entry in enumerate(padded_shape[::-1]):
            if i == len(padded_shape) - 1:
                steps[i] = 1
            else:
                temp = entry * temp
                steps[i] = temp

        steps = np.roll(steps, 1)
        steps = steps[::-1]
        # Length of the unwound strip: the flat index of the signal's last
        # element within the padded array, plus one.
        ones = np.ones_like(array_shape)
        ones[-1] = 0
        out = np.multiply(steps, array_shape - ones)
        length = np.sum(out)
        return length

    @classmethod
    def FFTConvolve(cls, in1, in2, len1, len2):
        # 1-D FFT convolution, zero-padded out to the next power of two.
        shape = len1 + len2 - 1
        fsize = 2 ** int(np.ceil(np.log2(shape)))
        fslice = slice(0, shape)
        conv = np.fft.ifft(np.fft.fft(in1, fsize) * np.fft.fft(in2, fsize))[fslice]
        # Inputs are real, so discard the residual imaginary part.
        return conv.real

    @classmethod
    def Convolve(cls, array, kernel):
        m = array.shape
        n = kernel.shape
        mn = np.add(m, n)
        mn = mn - np.ones_like(mn)  # full convolution shape, m + n - 1
        k_pad = cls.Pad(kernel, mn)
        a_pad = cls.Pad(array, mn)
        length_k = cls.GetLength(kernel.shape, k_pad.shape)
        length_a = cls.GetLength(array.shape, a_pad.shape)
        k_flat = k_pad.flatten()[0:length_k]
        a_flat = a_pad.flatten()[0:length_a]
        conv = cls.FFTConvolve(a_flat, k_flat, length_a, length_k)
        # The strip holds exactly prod(mn) samples, so this re-winds it onto
        # the padded grid; Shrink then crops back to the input shape.
        conv = np.resize(conv, mn)
        conv = cls.Shrink(conv, m)
        return conv



def main():

    array=np.random.rand(25,25,41,51)
    kernel=np.random.rand(10, 10, 10, 10)

    start2 =time.process_time()
    test2 = HelixCPU.Convolve(array, kernel)
    end2=time.process_time()

    start1= time.process_time()
    test1 = signal.fftconvolve(array, kernel, "same")
    end1= time.process_time()

    print ("")
    print ("========================")
    print ("SOME LARGE CONVOLVED RANDOM ARRAYS. ")
    print ("========================")
    print("")
    print ("Random Calorimeter Image of Size {0} Created".format(array.shape))
    print ("Random Kernel of Size {0} Created".format(kernel.shape))
    print("")
    print ("Value\tOriginal\tHelix")
    print ("Time Taken [s]\t{0}\t{1}\t{2}".format( (end1-start1), (end2-start2), (end2-start2)/(end1-start1) ))
    print ("Maximum Value\t{:03.2f}\t{:13.2f}".format( np.max(test1), np.max(test2) ))
    print ("Matrix Norm \t{:03.2f}\t{:13.2f}".format( np.linalg.norm(test1), np.linalg.norm(test2) ))
    print ("All Close?\t{0}".format(np.allclose(test1, test2)))
1 Answer

Sorry, I cannot add a comment because of low reputation, so I am posting this as an answer in the hope that it addresses your question.

By helix convolution, do you mean defining the convolution operation as a single matrix multiplication? If so, I have tried that approach in the past and found it to be quite memory-inefficient in practice.
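For concreteness, this is the formulation I mean, sketched in 1-D (conv_as_matmul and the shapes are illustrative): every output sample gets its own dense matrix row, which is where the memory cost comes from.

import numpy as np

# Convolution expressed as one matrix multiplication: a dense Toeplitz
# matrix with one shifted, flipped copy of the kernel per output sample.
def conv_as_matmul(x, h):
    n_out = len(x) - len(h) + 1       # "valid" output length
    T = np.zeros((n_out, len(x)))     # n_out * len(x) floats just to hold the operator
    for i in range(n_out):
        T[i, i:i + len(h)] = h[::-1]  # flipped kernel, shifted one step per row
    return T @ x

x = np.random.rand(16)
h = np.random.rand(4)
assert np.allclose(conv_as_matmul(x, h), np.convolve(x, h, mode="valid"))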
