Can NumPy replace these list comprehensions to make them run faster?


Can this matrix math be done faster?

I'm using Python to render 3D points in perspective. Speed matters, because at some point it will translate directly into frame rate.

I've tried using NumPy functions, but I can't get rid of two pesky list comprehensions. 90% of my program's runtime is spent inside them, which makes sense since they contain all the math, so I'd like to find a faster approach if possible.

  1. The first list comprehension happens when making pos: for each row of vert_array it adds shift, then performs a matrix multiplication on that sum.
  2. The second, persp, multiplies each row's x and y values by a scalar based on that particular row's z value.

Can these list comprehensions be replaced with something from NumPy? I read about numpy.einsum and numpy.fromfunction, but I'm having trouble understanding whether they're relevant to my problem.
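For what it's worth, the per-row shift-and-matmul pattern described above can indeed be written as a single np.einsum call. A minimal, self-contained sketch with made-up sample data (not the real vertex list):

```python
import numpy as np

rng = np.random.default_rng(0)
verts = rng.random((4, 3)).astype('float32')   # 4 sample vertices
unit_vec = np.eye(3, dtype='float32')          # identity rotation matrix
shift = np.array([0, 0, 10], dtype='float32')  # camera position offset

# Loop version: matrix-multiply each shifted row individually
pos_loop = np.array([unit_vec @ (row + shift) for row in verts])

# einsum version: for each row n, pos[n, i] = sum_j M[i, j] * v[n, j]
pos_einsum = np.einsum('ij,nj->ni', unit_vec, verts + shift)

assert np.allclose(pos_loop, pos_einsum)
```

The 'ij,nj->ni' subscripts say "contract the matrix's column index j against each row's coordinate index j, keeping the batch index n", which is exactly the per-row matmul the loop performs.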

Here is the function that performs the main rendering calculations. I want to make pos and persp faster:

import time
from random import randint
import numpy as np

def render_all_verts(vert_array):
    """
    :param vert_array: a 2-dimensional numpy array of float32 values and
        size n x 3, formatted as follows, where each row represents one
        vertex's coordinates in world-space coordinates:
        [[vert_x_1, vert_y_1, vert_z_1],
         [vert_x_2, vert_y_2, vert_z_2],
         ...
         [vert_x_n, vert_y_n, vert_z_n]]
    :return: a 2-dimensional numpy array of the same data type, size
        and format as vert_array, but in screen-space coordinates
    """
    # Unit Vector is a 9 element, 2D array that represents the rotation matrix
    # for the camera after some rotation (there's no rotation in this example)
    unit_vec = np.array([[1, 0, 0],
                         [0, 1, 0],
                         [0, 0, 1]], dtype='float32')

    # Shift is a 3 element, 1D array that represents the position
    # vector (x, y, z) of the camera in world-space coordinates
    shift = np.array([0, 0, 10], dtype='float32')
    
    # PURPOSE: This converts vert_array, with its coordinates relative
    #   to the world-space axes and origin, into coordinates relative
    #   to camera-space axes and origin (at the camera).
    # MATH DESCRIPTION: For every row, shift is added, then unit_vec (3x3)
    #   is matrix-multiplied with that sum (a 3-element vector).
    pos = np.array([np.matmul(unit_vec, row + shift) for row in vert_array], dtype='float32')
    # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    # This is a constant used to solve for the perspective
    focus = 5
    # PURPOSE: This calculation does the math to change the vertex coordinates,
    #   which are relative to the camera, into a representation of how they'll
    #   appear on screen in perspective. The x and y values are scaled based on
    #   the z value (distance from the camera)
    # MATH DESCRIPTION: Each row's first two columns are multiplied
    #   by a scalar, which is derived from that row's third column value.
    persp = np.array([np.multiply(row, np.array([focus / abs(row[2]), focus / abs(row[2]), 1])) for row in pos])
    # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
    return persp

I wrote the following code to time render_all_verts and to generate lists of random vertex coordinates to run it on repeatedly.

# TESTING RENDERING SPEED
start_time = time.time()

# The next few lines make an array, similar to the 3D points I'd be rendering.
# It contains n vertices with random coordinate values from -m to m
n = 1000
m = 50
example_vertices = np.array([(randint(-m, m), randint(-m, m), randint(-m, m)) for i in range(n)])
# This empty array has the same shape as example_vertices. The results are saved here.
rendered_verts = np.empty(example_vertices.shape)

print('Example vertices:\n', example_vertices)

# This loop will render the example vertices many times
render_times = 2000
for i in range(render_times):
    rendered_verts = render_all_verts(example_vertices)
    
print('\n\nLast calculated render of vertices:\n', rendered_verts)

print(f'\n\nThis program took an array of {n} vertices with randomized coordinate')
print(f'values between {-m} and {m} and rendered them {render_times} times.')
print(f'--- {time.time() - start_time} seconds ---')

Finally, here's one instance of the terminal output:

C:\...\simplified_demo.py 
Example vertices:
 [[-45   4 -43]
 [ 42  27  28]
 [-33  24 -18]
 ...
 [ -5  48   5]
 [-17 -17  29]
 [ -5 -46 -24]]
C:\...\simplified_demo.py:45: RuntimeWarning: divide by zero encountered in divide
  persp = np.array([np.multiply(row, np.array([focus / abs(row[2]), focus / abs(row[2]), 1]))


Last calculated render of vertices:
 [[ -6.81818182   0.60606061 -33.        ]
 [  5.52631579   3.55263158  38.        ]
 [-20.625       15.          -8.        ]
 ...
 [ -1.66666667  16.          15.        ]
 [ -2.17948718  -2.17948718  39.        ]
 [ -1.78571429 -16.42857143 -14.        ]]


This program took an array of 1000 vertices with randomized coordinate
values between -50 and 50 and rendered them 2000 times.
--- 15.910243272781372 seconds ---

Process finished with exit code 0

P.S. NumPy currently seems to handle dividing by zero and overflowing values just fine, so I'm not worried about the RuntimeWarnings. I've replaced my file paths with ...

P.P.S. Yes, I know I could use OpenGL or any other existing rendering engine to handle all this math, but I'm more interested in reinventing this wheel. For me this is mainly an experiment in learning Python and NumPy.

python algorithm numpy graphics linear-algebra
1 Answer

An initial speedup can be achieved by vectorizing both list comprehensions:


def render_all_verts(vert_array):
    """
    :param vert_array: a 2-dimensional numpy array of float32 values and
        size n x 3, formatted as follows, where each row represents one
        vertex's coordinates in world-space coordinates:
        [[vert_x_1, vert_y_1, vert_z_1],
         [vert_x_2, vert_y_2, vert_z_2],
         ...
         [vert_x_n, vert_y_n, vert_z_n]]
    :return: a 2-dimensional numpy array of the same data type, size
        and format as vert_array, but in screen-space coordinates
    """
    # Unit Vector is a 9 element, 2D array that represents the rotation matrix
    # for the camera after some rotation (there's no rotation in this example)
    unit_vec = np.array([[1, 0, 0],
                         [0, 1, 0],
                         [0, 0, 1]], dtype='float32')

    # Shift is a 3 element, 1D array that represents the position
    # vector (x, y, z) of the camera in world-space coordinates
    shift = np.array([0, 0, 10], dtype='float32')
    
    # PURPOSE: This converts vert_array, with its coordinates relative
    #   to the world-space axes and origin, into coordinates relative
    #   to camera-space axes and origin (at the camera).
    # MATH DESCRIPTION: For every row, shift is added, then unit_vec (3x3)
    #   is matrix-multiplied with that sum (a 3-element vector).
    pos2 = np.matmul(unit_vec, (vert_array + shift).T).T
    """
    pos = np.array([np.matmul(unit_vec, row + shift) for row in vert_array], dtype='float32')
    print(vert_array.shape, unit_vec.shape)
    assert pos2.shape == pos.shape, (pos2.shape, pos.shape)
    assert np.all(pos2 == pos), np.sum(pos - pos2)
    """
    # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    # This is a constant used to solve for the perspective
    focus = 5
    # PURPOSE: This calculation does the math to change the vertex coordinates,
    #   which are relative to the camera, into a representation of how they'll
    #   appear on screen in perspective. The x and y values are scaled based on
    #   the z value (distance from the camera)
    # MATH DESCRIPTION: Each row's first two columns are multiplied
    #   by a scalar, which is derived from that row's third column value.
    x = focus / np.abs(pos2[:, 2])
    # Stack the per-row scale factors into an (n, 3) array of
    # [x_scale, y_scale, 1] rows. (np.dstack would give a (1, n, 3)
    # array here and silently change the result's shape via broadcasting.)
    persp2 = pos2 * np.stack([x, x, np.ones_like(x)], axis=-1)
    """
    persp = np.array([np.multiply(row, np.array([focus / abs(row[2]), focus / abs(row[2]), 1])) for row in pos2])
    assert persp.shape == persp2.shape, (persp.shape, persp2.shape)
    assert np.all(persp == persp2), np.sum(persp - persp2)
    """
    # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
    return persp2
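To see the broadcasting that replaces the second list comprehension, here is a minimal standalone sketch (with made-up sample points, not the question's data). Indexing the per-row scale vector with [:, None] turns it into a column, so it broadcasts across the x/y slice without building any intermediate stacked array:

```python
import numpy as np

focus = 5.0
# Hypothetical camera-space points (x, y, z), for illustration only
pos = np.array([[10.0, 20.0, 2.0],
                [-6.0, 3.0, -4.0]])

# Scale each row's x and y by focus / |z|. The (2,) scale vector
# becomes a (2, 1) column via [:, None] and broadcasts across the
# (2, 2) x/y slice; z is left untouched.
persp = pos.copy()
persp[:, :2] *= (focus / np.abs(pos[:, 2]))[:, None]

print(persp)
# [[25.    50.     2.  ]
#  [-7.5    3.75  -4.  ]]
```

This in-place variant also avoids allocating the [x, x, 1] scale array entirely, which may shave a little more time off large vertex batches.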