SIFT implementation with OpenCV 2.2

Problem description | Votes: 32 | Answers: 6

Does anyone know of a link to an example of using the SIFT implementation with OpenCV 2.2? Regards,

opencv sift
6 Answers
33 votes

Below is a minimal example:

#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(int argc, const char* argv[])
{
    const cv::Mat input = cv::imread("input.jpg", 0); //Load as grayscale

    cv::SiftFeatureDetector detector;
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(input, keypoints);

    // Add results to image and save.
    cv::Mat output;
    cv::drawKeypoints(input, keypoints, output);
    cv::imwrite("sift_result.jpg", output);

    return 0;
}

Tested on OpenCV 2.3.
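
If you also need the descriptors (not just the keypoints), a sketch using the same 2.2/2.3-era API would look like this (the image file name is just a placeholder):

#include <opencv/cv.h>
#include <opencv/highgui.h>

#include <vector>

int main()
{
    const cv::Mat input = cv::imread("input.jpg", 0); // Load as grayscale

    // Detect keypoints as in the answer above.
    cv::SiftFeatureDetector detector;
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(input, keypoints);

    // Compute a 128-dimensional SIFT descriptor for each detected keypoint.
    cv::SiftDescriptorExtractor extractor;
    cv::Mat descriptors;
    extractor.compute(input, keypoints, descriptors);

    return 0;
}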


30 votes

You can obtain the SIFT detector and SIFT-based descriptor extractor in several ways. Since others have already suggested the more direct methods, I will offer a more "software engineering" approach that makes your code more flexible to change (i.e. easier to switch to other detectors and extractors).

First, if you want a detector with the built-in default parameters, the best way is to use OpenCV's factory methods to create it. Here is how:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

#include <vector>

using namespace std;
using namespace cv;

int main(int argc, char *argv[])
{        
  Mat image = imread("TestImage.jpg");

  // Create smart pointer for SIFT feature detector.
  Ptr<FeatureDetector> featureDetector = FeatureDetector::create("SIFT");
  vector<KeyPoint> keypoints;

  // Detect the keypoints
  featureDetector->detect(image, keypoints); // NOTE: featureDetector is a pointer hence the '->'.

  //Similarly, we create a smart pointer to the SIFT extractor.
  Ptr<DescriptorExtractor> featureExtractor = DescriptorExtractor::create("SIFT");

  // Compute the 128-dimensional SIFT descriptor at each keypoint.
  // Each row in "descriptors" corresponds to the SIFT descriptor of one keypoint.
  Mat descriptors;
  featureExtractor->compute(image, keypoints, descriptors);

  // If you would like to draw the detected keypoint just to check
  Mat outputImage;
  Scalar keypointColor = Scalar(255, 0, 0);     // Blue keypoints.
  drawKeypoints(image, keypoints, outputImage, keypointColor, DrawMatchesFlags::DEFAULT);

  namedWindow("Output");
  imshow("Output", outputImage);

  char c = ' ';
  while ((c = waitKey(0)) != 'q');  // Keep window there until user presses 'q' to quit.

  return 0;

}

The reason for using the factory methods is flexibility: you can now switch to a different keypoint detector or feature extractor, e.g. SURF, simply by changing the argument passed to the "create" factory methods, like this:

Ptr<FeatureDetector> featureDetector = FeatureDetector::create("SURF");
Ptr<DescriptorExtractor> featureExtractor = DescriptorExtractor::create("SURF");

For the other possible arguments you can pass to create other detectors or extractors, see: http://opencv.itseez.com/modules/features2d/doc/common_interfaces_of_feature_detectors.html#featuredetector-create

http://opencv.itseez.com/modules/features2d/doc/common_interfaces_of_descriptor_extractors.html?highlight=descriptorextractor#descriptorextractor-create

Now, using the factory methods means you conveniently do not have to guess suitable parameters to pass to each detector or extractor, which is handy if you are new to them. However, if you want to create your own custom SIFT detector, you can construct a SiftFeatureDetector object with custom parameters, wrap it in a smart pointer, and refer to it through the featureDetector smart-pointer variable as above.
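
For example, a sketch of that idea in the program above (the SiftFeatureDetector constructor signature changed between 2.x releases, and the threshold values here are purely illustrative):

// Wrap a SIFT detector built with custom parameters in the same smart pointer type,
// so the rest of the code that uses featureDetector stays unchanged.
Ptr<FeatureDetector> featureDetector = new SiftFeatureDetector(0.05, 10.0);
featureDetector->detect(image, keypoints);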


6 votes

A simple example using the SIFT nonfree feature detector in OpenCV 2.4:

#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/nonfree.hpp>
using namespace cv;

int main(int argc, char** argv)
{

    if(argc < 2)
        return -1;

    Mat img = imread(argv[1]);

    SIFT sift;
    vector<KeyPoint> key_points;

    Mat descriptors;
    sift(img, Mat(), key_points, descriptors);

    Mat output_img;
    drawKeypoints(img, key_points, output_img);

    namedWindow("Image");
    imshow("Image", output_img);
    waitKey(0);
    destroyWindow("Image");

    return 0;
}

5 votes

OpenCV provides SIFT, SURF, and other feature descriptors out of the box. Note that the SIFT algorithm is patented, so it may be incompatible with the regular OpenCV use/license.
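
A related practical note: from OpenCV 2.4 onwards the patented SIFT/SURF implementations live in the nonfree module, and if you create them via the factory methods you must register that module first. A minimal sketch, assuming OpenCV 2.4:

#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>

// Register the nonfree algorithms so the factory methods can find "SIFT"/"SURF".
cv::initModule_nonfree();
cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("SIFT");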


3 votes

Another simple example using the SIFT nonfree feature detector in OpenCV 2.4. Be sure to add the opencv_nonfree240.lib dependency.

#include "cv.h"
#include "highgui.h"
#include <opencv2/nonfree/nonfree.hpp>

int main(int argc, char** argv)
{
   cv::Mat img = cv::imread("image.jpg");

   cv::SIFT sift(10);   // retain only the 10 best keypoints

   cv::vector<cv::KeyPoint> key_points;

   cv::Mat descriptors, mascara;   // mascara is an (empty) optional mask
   cv::Mat output_img;

   sift(img, mascara, key_points, descriptors);
   cv::drawKeypoints(img, key_points, output_img);

   cv::namedWindow("Image");
   cv::imshow("Image", output_img);
   cv::waitKey(0);

   return 0;
}

0 votes

In case anyone is wondering how to do it with 2 images:

import numpy as np
import cv2

# NOTE: load your own image pair here; the file names are placeholders.
src_img = cv2.imread('source.jpg', 0)
trg_img = cv2.imread('target.jpg', 0)

print('Initiate SIFT detector')
sift = cv2.xfeatures2d.SIFT_create()
print('find the keypoints and descriptors with SIFT')
gcp1, des1 = sift.detectAndCompute(src_img, None)
gcp2, des2 = sift.detectAndCompute(trg_img, None)

# create BFMatcher object (SIFT descriptors are floats, so use L2 distance;
# NORM_HAMMING is only for binary descriptors such as ORB)
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

matches = bf.match(des1, des2)
# Sort them in the order of their distance.
matches = sorted(matches, key=lambda x: x.distance)

# draw only the first 100 matches
img3 = cv2.drawMatches(src_img, gcp1, trg_img, gcp2, matches[:100], None, flags=2)