I am trying to create an image based on the average of multiple images. The way I do this is to loop over the pixel values of 2 photos, add them together, and divide by 2. Simple math. However, it runs very slowly: averaging 2× 10 MP photos takes about 23 seconds on a maxed-out 2016 15" MacBook Pro, compared to far less time for a similar algorithm using Apple's CIFilter API. I am currently using this, based on another StackOverflow question here:
static func averageImages(primary: CGImage, secondary: CGImage) -> CGImage? {
    guard primary.width == secondary.width && primary.height == secondary.height else {
        return nil
    }

    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let width = primary.width
    let height = primary.height
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let bytesPerRow = bytesPerPixel * width
    let bitmapInfo = RGBA32.bitmapInfo

    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("unable to create context")
        return nil
    }

    guard let context2 = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("unable to create context 2")
        return nil
    }

    context.draw(primary, in: CGRect(x: 0, y: 0, width: width, height: height))
    context2.draw(secondary, in: CGRect(x: 0, y: 0, width: width, height: height))

    guard let buffer = context.data else {
        print("Unable to get context data")
        return nil
    }

    guard let buffer2 = context2.data else {
        print("Unable to get context 2 data")
        return nil
    }

    let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)
    let pixelBuffer2 = buffer2.bindMemory(to: RGBA32.self, capacity: width * height)

    for row in 0 ..< Int(height) {
        if row % 10 == 0 {
            print("Row: \(row)")
        }
        for column in 0 ..< Int(width) {
            let offset = row * width + column
            let picture1 = pixelBuffer[offset]
            let picture2 = pixelBuffer2[offset]
            let minR = min(255, (UInt32(picture1.redComponent) + UInt32(picture2.redComponent)) / 2)
            let minG = min(255, (UInt32(picture1.greenComponent) + UInt32(picture2.greenComponent)) / 2)
            let minB = min(255, (UInt32(picture1.blueComponent) + UInt32(picture2.blueComponent)) / 2)
            let minA = min(255, (UInt32(picture1.alphaComponent) + UInt32(picture2.alphaComponent)) / 2)
            pixelBuffer[offset] = RGBA32(red: UInt8(minR), green: UInt8(minG), blue: UInt8(minB), alpha: UInt8(minA))
        }
    }

    let outputImage = context.makeImage()
    return outputImage
}
struct RGBA32: Equatable {
    //private var color: UInt32
    var color: UInt32

    var redComponent: UInt8 {
        return UInt8((color >> 24) & 255)
    }
    var greenComponent: UInt8 {
        return UInt8((color >> 16) & 255)
    }
    var blueComponent: UInt8 {
        return UInt8((color >> 8) & 255)
    }
    var alphaComponent: UInt8 {
        return UInt8((color >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        let red = UInt32(red)
        let green = UInt32(green)
        let blue = UInt32(blue)
        let alpha = UInt32(alpha)
        color = (red << 24) | (green << 16) | (blue << 8) | (alpha << 0)
    }

    init(color: UInt32) {
        self.color = color
    }

    static let red = RGBA32(red: 255, green: 0, blue: 0, alpha: 255)
    static let green = RGBA32(red: 0, green: 255, blue: 0, alpha: 255)
    static let blue = RGBA32(red: 0, green: 0, blue: 255, alpha: 255)
    static let white = RGBA32(red: 255, green: 255, blue: 255, alpha: 255)
    static let black = RGBA32(red: 0, green: 0, blue: 0, alpha: 255)
    static let magenta = RGBA32(red: 255, green: 0, blue: 255, alpha: 255)
    static let yellow = RGBA32(red: 255, green: 255, blue: 0, alpha: 255)
    static let cyan = RGBA32(red: 0, green: 255, blue: 255, alpha: 255)

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: RGBA32, rhs: RGBA32) -> Bool {
        return lhs.color == rhs.color
    }
}
I am not very experienced with working on raw pixel values, so there may well be a lot of room for optimization. The RGBA32 declaration may not be needed, but then again I am not sure how to simplify the code. I tried replacing the struct with a plain UInt32; however, when I divide that by 2, the separation between the four channels gets mangled and I end up with incorrect results (though, for what it is worth, that does bring the computation time down to about 6 seconds).
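For illustration, there is a bit-twiddling approach that averages all four packed bytes of a UInt32 at once without letting a carry from one channel bleed into the next. This is just a sketch of the arithmetic, not the code above, and the helper name is made up:

```swift
// Per-byte average of two packed RGBA pixels in a single UInt32 operation.
// Identity: a + b == (a & b) * 2 + (a ^ b), so floor((a + b) / 2) is
// (a & b) + ((a ^ b) >> 1). Masking the shifted term with 0x7F7F7F7F
// discards the bit that would otherwise cross each byte boundary.
func packedAverage(_ a: UInt32, _ b: UInt32) -> UInt32 {
    return (a & b) + (((a ^ b) >> 1) & 0x7F7F7F7F)
}
```

The result rounds down per channel, the same as integer division by 2.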
I tried removing the alpha channel (just hard-coding it to 255), and also removing the safety check that makes sure no value exceeds 255. That brought the computation time down to 19 seconds. However, that is still far from the roughly 6 seconds I am hoping to get close to, and averaging the alpha channel would be nice to keep.
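Incidentally, the min(255, …) guard can be dropped by reasoning rather than by experiment: each component is at most 255, so the widened sum is at most 510 and half of it always fits back into a UInt8. A small exhaustive check of that arithmetic (the helper name is illustrative):

```swift
// Halving the widened sum of two 8-bit components can never exceed 255,
// so no clamping is needed before narrowing back to UInt8.
func averagedComponent(_ x: UInt8, _ y: UInt8) -> UInt8 {
    return UInt8((UInt16(x) + UInt16(y)) / 2)  // max is (255 + 255) / 2 = 255
}

// Verify over all 65,536 possible input pairs.
var allFit = true
for x in 0...255 {
    for y in 0...255 {
        let avg = (x + y) / 2
        if avg > 255 || Int(averagedComponent(UInt8(x), UInt8(y))) != avg {
            allFit = false
        }
    }
}
```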
Note: I am aware of CIFilters; however, darkening the images first and then applying the CIAdditionCompositing filter does not work, because Apple's API actually uses a more sophisticated algorithm than straight addition. For more details, see my previous code on the subject here, and a similar question, with tests demonstrating that Apple's API does not simply add the pixel values, here.
**Edit:** Thanks to all the feedback, I have been able to make big improvements. By far the biggest difference came from switching from a Debug to a Release build, which saved a lot of time. I was then able to write faster code for manipulating the RGBA values that does not need a separate struct. Together these changes brought the time from 23 seconds down to about 10 (including the Debug-to-Release improvement). The code now looks like this, also rewritten a bit for readability:
static func averageImages(primary: CGImage, secondary: CGImage) -> CGImage? {
    guard primary.width == secondary.width && primary.height == secondary.height else {
        return nil
    }

    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let width = primary.width
    let height = primary.height
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let bytesPerRow = bytesPerPixel * width
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    guard let primaryContext = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
        let secondaryContext = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("unable to create context")
        return nil
    }

    primaryContext.draw(primary, in: CGRect(x: 0, y: 0, width: width, height: height))
    secondaryContext.draw(secondary, in: CGRect(x: 0, y: 0, width: width, height: height))

    guard let primaryBuffer = primaryContext.data, let secondaryBuffer = secondaryContext.data else {
        print("Unable to get context data")
        return nil
    }

    let primaryPixelBuffer = primaryBuffer.bindMemory(to: UInt32.self, capacity: width * height)
    let secondaryPixelBuffer = secondaryBuffer.bindMemory(to: UInt32.self, capacity: width * height)

    for row in 0 ..< Int(height) {
        if row % 10 == 0 {
            print("Row: \(row)")
        }
        for column in 0 ..< Int(width) {
            let offset = row * width + column
            let primaryPixel = primaryPixelBuffer[offset]
            let secondaryPixel = secondaryPixelBuffer[offset]
            let red = (((primaryPixel >> 24) & 255) / 2 + ((secondaryPixel >> 24) & 255) / 2) << 24
            let green = (((primaryPixel >> 16) & 255) / 2 + ((secondaryPixel >> 16) & 255) / 2) << 16
            let blue = (((primaryPixel >> 8) & 255) / 2 + ((secondaryPixel >> 8) & 255) / 2) << 8
            let alpha = (primaryPixel & 255) / 2 + (secondaryPixel & 255) / 2
            primaryPixelBuffer[offset] = red | green | blue | alpha
        }
    }
    print("Done looping")

    let outputImage = primaryContext.makeImage()
    return outputImage
}
Regarding multithreading: since I will be running this function many times, I plan to implement the multithreading across the iterations of the function rather than inside the function itself. I do expect a further performance gain from that, but it has to be balanced against the increased memory allocation of holding more images in memory at the same time.
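A sketch of what "multithreading across iterations" could look like, using DispatchQueue.concurrentPerform with plain byte arrays standing in for the image buffers (all names are hypothetical, and the real code would of course pass CGImages):

```swift
import Dispatch

// Average each pair of equal-length byte buffers, one pair per iteration.
// concurrentPerform runs iterations in parallel across cores; writing
// through the unsafe buffer pointer keeps the concurrent writes to
// disjoint indices well-defined.
func averagePairs(_ pairs: [([UInt8], [UInt8])]) -> [[UInt8]] {
    var results = [[UInt8]](repeating: [], count: pairs.count)
    results.withUnsafeMutableBufferPointer { out in
        DispatchQueue.concurrentPerform(iterations: pairs.count) { i in
            let (a, b) = pairs[i]
            out[i] = zip(a, b).map { p in p.0 / 2 + p.1 / 2 }
        }
    }
    return results
}

let demo = averagePairs([([10, 20], [20, 40]), ([255, 0], [255, 255])])
```

Note that `p.0 / 2 + p.1 / 2` truncates each half, so e.g. averaging 255 with 255 yields 254; that mirrors the byte-wise approach suggested in the answers below, and avoids any widening.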
Thanks to everyone who contributed. Since all of the feedback came through comments, I cannot mark any of them as the correct answer. I also do not want to post the updated code as an answer myself, since I am not really the one who came up with it. Any suggestions on how to proceed?
A couple of options:

Parallelize the routine:

You can improve performance by using concurrentPerform to move the processing onto multiple cores. In its simplest form, you just replace the outer for loop with DispatchQueue.concurrentPerform.

Note a few other observations: because you are performing the same calculation on every byte, you can simplify this further, getting rid of the casting, shifting, masking, and so on. I have also moved repeated calculations out of the inner loop. So I am binding the buffers to a plain UInt8 type and iterating byte by byte:

extension CGImage {
    func average(with secondImage: CGImage) -> CGImage? {
        guard
            width == secondImage.width,
            height == secondImage.height
        else {
            return nil
        }

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bytesPerPixel = 4
        let bitsPerComponent = 8
        let bytesPerRow = bytesPerPixel * width
        let bitmapInfo = RGBA32.bitmapInfo

        guard
            let context1 = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
            let context2 = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
            let buffer1 = context1.data,
            let buffer2 = context2.data
        else {
            return nil
        }

        context1.draw(self, in: CGRect(x: 0, y: 0, width: width, height: height))
        context2.draw(secondImage, in: CGRect(x: 0, y: 0, width: width, height: height))

        let imageBuffer1 = buffer1.bindMemory(to: UInt8.self, capacity: width * height * 4)
        let imageBuffer2 = buffer2.bindMemory(to: UInt8.self, capacity: width * height * 4)

        DispatchQueue.concurrentPerform(iterations: height) { row in  // i.e. a parallelized version of `for row in 0 ..< height {`
            for offset in row * bytesPerRow ..< (row + 1) * bytesPerRow {
                let byte1 = imageBuffer1[offset]
                let byte2 = imageBuffer2[offset]
                imageBuffer1[offset] = byte1 / 2 + byte2 / 2
            }
        }

        return context1.makeImage()
    }
}
FWIW, I have defined this as a CGImage extension, which is invoked like so:

let combinedImage = image1.average(with: image2)

This steps through the pixel buffer one byte at a time. You could change it to process multiple pixels per iteration, though I did not see a material change when I tried that.
I found concurrentPerform to be many times faster than the non-parallelized for loop. Unfortunately, the nested loops account for only a small portion of the function's overall processing time (e.g. once you include the overhead of building the two pixel buffers, the overall result is only about 40% faster than the unoptimized rendition). On a well-spec'd 2018 MacBook Pro, it processes a 10,000 × 10,000 px image in about half a second.
Another option is the Accelerate vImage library.

It provides a wide variety of image-processing routines and is a good library to get familiar with if you are going to be processing large images. I do not know whether its alpha-compositing algorithm is mathematically identical to your "average the byte values" algorithm, but it may well be adequate for your purposes. It has the virtue of reducing the nested loops to a single API call, and it opens the door to a whole world of other compositing and image-processing routines:
Anyway, I found its performance to be similar to that of the concurrentPerform algorithm:

extension CGImage {
    func averageVimage(with secondImage: CGImage) -> CGImage? {
        let bitmapInfo: CGBitmapInfo = [.byteOrder32Little, CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)]
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        guard
            width == secondImage.width,
            height == secondImage.height,
            let format = vImage_CGImageFormat(bitsPerComponent: 8, bitsPerPixel: 32, colorSpace: colorSpace, bitmapInfo: bitmapInfo)
        else {
            return nil
        }

        guard var sourceBuffer = try? vImage_Buffer(cgImage: self, format: format) else { return nil }
        defer { sourceBuffer.free() }

        guard var sourceBuffer2 = try? vImage_Buffer(cgImage: secondImage, format: format) else { return nil }
        defer { sourceBuffer2.free() }

        guard var destinationBuffer = try? vImage_Buffer(width: width, height: height, bitsPerPixel: 32) else { return nil }
        defer { destinationBuffer.free() }

        guard vImagePremultipliedConstAlphaBlend_ARGB8888(&sourceBuffer, Pixel_8(127), &sourceBuffer2, &destinationBuffer, vImage_Flags(kvImageNoFlags)) == kvImageNoError else {
            return nil
        }

        return try? destinationBuffer.createCGImage(format: format)
    }
}
For giggles and grins, I also tried rendering the images with CGBitmapInfo.floatComponents and using a single BLAS call to average the two vectors. It worked fine but, unsurprisingly, it was slower than the integer-based routines above.
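The single-call float approach amounts to the BLAS-style update y ← αx + βy with α = β = 0.5. A portable sketch of that arithmetic, without Accelerate (the function name and loop are illustrative, not the BLAS routine itself):

```swift
// y <- alpha * x + beta * y, the classic "axpby" update.
// With alpha = beta = 0.5 it averages the two float vectors in place,
// which is what a single BLAS call over float pixel components does.
func axpby(alpha: Float, _ x: [Float], beta: Float, _ y: inout [Float]) {
    precondition(x.count == y.count, "vectors must have equal length")
    for i in x.indices {
        y[i] = alpha * x[i] + beta * y[i]
    }
}

var y: [Float] = [0, 128, 255]
axpby(alpha: 0.5, [255, 0, 255], beta: 0.5, &y)
// y now holds the element-wise averages
```

Unlike the integer routines, the float version keeps fractional averages (e.g. 127.5) instead of truncating.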