Python - Fast batch modification of PNGs

Question (0 votes, 1 answer)

I wrote a Python script that combines images in a particular way for an OpenGL shader. The problem is that I have a large number of very large maps and they take a long time to process. Is there a way to write this so it runs faster?

from PIL import Image
import numpy as np

map_data = {}
image_data = {}
for map_postfix in names:
    file_name = inputRoot + '-' + map_postfix + resolution + '.png'
    print 'Loading ' + file_name
    image_data[map_postfix] = Image.open(file_name, 'r')
    map_data[map_postfix] = image_data[map_postfix].load()


color = map_data['ColorOnly']
ambient = map_data['AmbientLight']
shine = map_data['Shininess']

width = image_data['ColorOnly'].size[0]
height = image_data['ColorOnly'].size[1]

arr = np.zeros((height, width, 4), dtype=int)

for i in range(width):
    for j in range(height):
        ambient_mod = ambient[i,j][0] / 255.0
        arr[j, i, :] = [color[i, j][0] * ambient_mod,
                        color[i, j][1] * ambient_mod,
                        color[i, j][2] * ambient_mod,
                        shine[i, j][0]]

print 'Converting Color Map to image'
return Image.fromarray(arr.astype(np.uint8))

This is just one example of a much larger batch process, so what I am really interested in is whether there is a faster way to iterate over and modify image files. Almost all of the time is spent in the nested loop, not in loading and saving.

python image-processing python-imaging-library
1 Answer

1 vote

A vectorised-code example - test in timeit / zmq.Stopwatch() to see the effect for your case.

A reported speedup: 22.14 sec >> 0.1624 sec!

While your code seems to loop just over RGBA[x,y], let me show a "vectorised" syntax of code that benefits from the numpy matrix-manipulation utilities. (Forget the RGB/YUV manipulation itself - it was originally based on OpenCV rather than PIL - but re-use the vectorised-syntax approach so as to avoid the for-loops and adapt it to work efficiently for your calculus. A wrong order of operations can more than double your processing time.)

And use a test / optimise / re-test loop for speeding things up.

For testing, use the standard python timeit if a [msec] resolution is enough; go for zmq.Stopwatch() if you need to go down to [usec] resolution.
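For instance, a minimal timeit sketch could look as follows ( convert_maps() is just a hypothetical zero-argument wrapper around the whole conversion routine from the question, not a name from the original script ):

import timeit

# a minimal timing sketch -- convert_maps() is a hypothetical wrapper
# around the whole per-map conversion shown in the question
n_runs  = 10
elapsed = timeit.timeit( lambda: convert_maps(), number = n_runs )
print 'mean per run ~ %.1f [msec]' % ( 1000. * elapsed / n_runs )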

# Vectorised-code example, to see the syntax & principles
#                          do not mind another order of RGB->BRG layers
#                          it has been OpenCV traditional convention
#                          it has no other meaning in this demo of VECTORISED code
import numpy

def get_YUV_U_Cb_Rec709_BRG_frame( brgFRAME ):  # For the Rec. 709 primaries used in gamma-corrected sRGB, fast, VECTORISED MUL/ADD CODE
    out =  numpy.zeros(            brgFRAME.shape[0:2] )
    out -= 0.09991 / 255 *         brgFRAME[:,:,1]  # // Red
    out -= 0.33601 / 255 *         brgFRAME[:,:,2]  # // Green
    out += 0.436   / 255 *         brgFRAME[:,:,0]  # // Blue
    return out
# normalise to <0.0 - 1.0> before vectorised MUL/ADD, saves [usec] ...
# on 480x640 [px] faster goes about 2.2 [msec] instead of 5.4 [msec]
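A minimal usage sketch of the demo above ( the random 480x640 frame is placeholder data only, not one of your maps ):

import numpy

# placeholder BRG frame, just to exercise the vectorised demo above
brgFRAME = numpy.random.randint( 0, 256, ( 480, 640, 3 ) ).astype( numpy.uint8 )
U_frame  = get_YUV_U_Cb_Rec709_BRG_frame( brgFRAME )
print 'U-channel shape:', U_frame.shape          # (480, 640), values roughly within <-0.44 ... +0.44>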

In your case, using dtype = numpy.int, the guess is that it will be faster to MUL first by ambient[:,:,0] and only at the end normalise with a DIV, arr[:,:,:3] /= 255:
# test if this goes even faster once saving the vectorised overhead on matrix DIV
arr[:,:,0] = color[:,:,0] * ambient[:,:,0] / 255  # MUL remains INT, shall precede DIV
arr[:,:,1] = color[:,:,1] * ambient[:,:,0] / 255  # 
arr[:,:,2] = color[:,:,2] * ambient[:,:,0] / 255  # 
arr[:,:,3] = shine[:,:,0]                         # STO alpha
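As a tiny sanity check of that ordering argument ( the sample pixel values below are made up, just to illustrate the point ):

import numpy as np

# why the integer MUL must precede the DIV -- sample values are made up
c = np.array( [200, 100, 50], dtype = int )     # one colour pixel [R,G,B]
a = 128                                         # its ambient[i,j][0] value
print c * a // 255                              # [100  50  25]   MUL first keeps the signal
print c * ( a // 255 )                          # [0 0 0]         DIV first floors ambient to 0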

So, how would this look in your algorithm?

One does not need a budget and time as impressive as Peter Jackson's, who planned, spanned and executed a massive number-crunching job for more than three years in a New Zealand hangar crowded with a herd of SGI workstations while producing the fully digital mastering pipeline for "The Lord of the Rings", frame by frame and pixel by pixel, to appreciate that milliseconds, microseconds and even nanoseconds matter in a mass-production pipeline.

So, take a deep breath, then test and re-test, so as to optimise your real-world image-processing performance to the level your project needs.

Hope this may help:

# OPTIONAL for performance testing -------------# ||||||||||||||||||||||||||||||||
from zmq import Stopwatch                       # _MICROSECOND_ timer
#                                               # timer-resolution step ~ 21 nsec
#                                               # Yes, NANOSECOND-s
# OPTIONAL for performance testing -------------# ||||||||||||||||||||||||||||||||
arr        = np.zeros( ( height, width, 4 ), dtype = int )
aStopWatch = Stopwatch()                        # ||||||||||||||||||||||||||||||||
# /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\# <<< your original code segment          
#  aStopWatch.start()                           # |||||||||||||__.start
#  for i in range(     width  ):
#      for j in range( height ):
#          ambient_mod  = ambient[i,j][0] / 255.0
#          arr[j, i, :] = [ color[i,j][0] * ambient_mod, \
#                           color[i,j][1] * ambient_mod, \
#                           color[i,j][2] * ambient_mod, \
#                           shine[i,j][0]                \
#                           ]
#  usec_for = aStopWatch.stop()                 # |||||||||||||__.stop
#  print 'Converting Color Map to image'
#  print '           FOR processing took ', usec_for, ' [usec]'
# /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\# <<< proposed alternative
aStopWatch.start()                              # |||||||||||||__.start
# reduced numpy broadcasting one dimension less # ref. comments below
arr[:,:, 0]  = color[:,:,0] * ambient[:,:,0]    # MUL ambient[0]  * [{R}]
arr[:,:, 1]  = color[:,:,1] * ambient[:,:,0]    # MUL ambient[0]  * [{G}]
arr[:,:, 2]  = color[:,:,2] * ambient[:,:,0]    # MUL ambient[0]  * [{B}]
arr[:,:,:3] /= 255                              # DIV 255 to normalise
arr[:,:, 3]  = shine[:,:,0]                     # STO shine[  0] in [3]
usec_Vector  = aStopWatch.stop()                # |||||||||||||__.stop
print 'Converting Color Map to image'
print '           Vectorised processing took ', usec_Vector, ' [usec]'
return Image.fromarray( arr.astype( np.uint8 ) )
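For completeness, a sketch of how the whole per-map conversion might be wrapped once each map is read straight into a numpy array via numpy.asarray( Image.open( ... ) ), which avoids the per-pixel .load() access object altogether. The combine_maps() name and its file-name parameters are illustrative only, and the sketch assumes all three maps are same-sized RGB(A) PNGs:

import numpy as np
from PIL import Image

def combine_maps( color_png, ambient_png, shine_png ):
    # read each map straight into an ndarray -- no per-pixel .load() access
    color   = np.asarray( Image.open( color_png   ) ).astype( np.float32 )
    ambient = np.asarray( Image.open( ambient_png ) ).astype( np.float32 )
    shine   = np.asarray( Image.open( shine_png   ) )

    ambient_mod = ambient[:,:,0] / 255.                    # per-pixel scale factor
    out         = np.empty( color.shape[:2] + ( 4, ), dtype = np.uint8 )
    out[:,:,:3] = color[:,:,:3] * ambient_mod[:,:,None]    # broadcast MUL over R, G, B
    out[:,:, 3] = shine[:,:,0]                             # STO shininess as alpha
    return Image.fromarray( out )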