OpenCV image stitching blending (multi-band blending)


I am trying to blend the seam of an image I have just stitched together, using the cv::detail::MultiBandBlender from OpenCV 3.2, found in #include "opencv2/stitching/detail/blenders.hpp". There is not much documentation I could find and even fewer coding examples, but I managed to find a good blog that helps explain the steps here.

When I run the code I get the following error:

error:/opencv/modules/core/src/copy.cpp:1176: error: (-215) top >= 0 && bottom >= 0 && left >= 0 && right >= 0 in function copyMakeBorder

Here is the code used for the blending (assume that the stitching, warpPerspective, and the homography found are correct):

//Mask of image to be combined so you can get resulting mask
Mat mask1(image1.size(), CV_8UC1, Scalar::all(255));
Mat mask2(image2.size(), CV_8UC1, Scalar::all(255));
Mat image1Updated, image2Updated;
//Warp the masks and the images to their new positions so they are all the same size to be overlaid and blended
warpPerspective(image1, image1Updated, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT,(0));
warpPerspective(image2, image2Updated, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT,   (0));
warpPerspective(mask1, mask1, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT,(0));
warpPerspective(mask2, mask2, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT,   (0));

//create blender
detail::MultiBandBlender blender(false, 5);
//feed images and the mask areas to blend
blender.feed(image1Updated, mask1, Point2f (0,0));
blender.feed(image2Updated, mask2, Point2f (0,0));
//prepare resulting size of image
blender.prepare(Rect(0, 0, result.size().width, result.size().height));
Mat result_s, result_mask;
//blend
blender.blend(result_s, result_mask);

The error occurs when I attempt the blender.feed call.

A little side note: when making the masks for the blender, should the masks cover the entire image, or just the regions where the images overlap each other in the stitch?

Thanks in advance for any help.

EDIT

I have it working now, but I am getting this result for the blended image. Here is the stitched image without blending for reference. Any ideas on how to improve it?

c++ opencv blending image-stitching
2 Answers
  1. Call blender.prepare before blender.feed (prepare sets the destination ROI; feeding before it is set produces the negative border sizes that trip the copyMakeBorder assertion).
  2. Rework your masks (one half 255, the other half 0); a rough sketch of such masks follows the code below.
//Mask of the image to be combined so you can get resulting mask
Mat mask1, mask2;
mask1 = optimalSeamMask(energy, path);
mask2 = Mat::ones(mask1.rows, mask1.cols, CV_8UC1) * 255 - mask1;

Mat image1Updated, image2Updated;
//Warp the masks and the images to their new positions so they are all the same size to be overlaid and blended
warpPerspective(image1, image1Updated, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT,(0));
warpPerspective(image2, image2Updated, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT,   (0));
warpPerspective(mask1, mask1, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT,(0));
warpPerspective(mask2, mask2, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT,   (0));

//create blender
detail::MultiBandBlender blender(false, 5);
//prepare the size of the resulting image
blender.prepare(Rect(0, 0, result.size().width, result.size().height));
//feed the images and their mask areas to blend
blender.feed(image1Updated, mask1, Point2f (0,0));
blender.feed(image2Updated, mask2, Point2f (0,0));
Mat result_s, result_mask;
//blend
blender.blend(result_s, result_mask);
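
The optimalSeamMask helper above is not defined in the answer. As a minimal sketch of the "half 255, half 0" idea (my assumption, not the answerer's code), complementary masks can be built directly in the output frame and fed to the blender as-is, assuming the seam runs roughly down the middle of the overlap:

// Hypothetical sketch: complementary half/half masks when no seam finder is used.
// The split column is arbitrary; a proper seam mask should follow the optimal
// seam through the overlap region instead.
Mat mask1 = Mat::zeros(result.size(), CV_8UC1);
Mat mask2 = Mat::zeros(result.size(), CV_8UC1);
int split = result.cols / 2;                               // assumed seam position
mask1(Rect(0, 0, split, result.rows)).setTo(Scalar(255));  // left half comes from image1
mask2(Rect(split, 0, result.cols - split, result.rows)).setTo(Scalar(255)); // right half from image2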


This is old, but I found the cause of the problem and will share it in case someone runs into the same issue. The problem is in the warpPerspective calls: there will be some black pixels around the warped image, so you have to change

warpPerspective(image1, image1Updated, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT,(0));
warpPerspective(image2, image2Updated, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT,   (0));
warpPerspective(mask1, mask1, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT,(0));
warpPerspective(mask2, mask2, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT,   (0));

to:

warpPerspective(image1, image1Updated, (translation*homography), result.size(), INTER_LINEAR, BORDER_REPLICATE);
warpPerspective(image2, image2Updated, translation, result.size(), INTER_LINEAR, BORDER_REPLICATE);
warpPerspective(mask1, mask1, (translation*homography), result.size());
warpPerspective(mask2, mask2, translation, result.size());

This replaces all the black regions around the warped images with their nearest edge pixels, so the constant black border no longer bleeds dark fringes into the blend, while the masks (warped with the default zero border value) still limit each image's contribution to its valid area.
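
One further usage note, given the question's edit about the blended result looking wrong. This is an assumption about typical MultiBandBlender usage rather than something stated in the answers: the blend output often comes back at 16-bit signed depth, so it may need converting back to 8-bit before viewing or saving.

// Minimal sketch (assumption): convert the blend output back to 8-bit.
// "blended.jpg" is an arbitrary example path.
Mat result_8u;
result_s.convertTo(result_8u, CV_8U);
imwrite("blended.jpg", result_8u);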
