Glasses detection

Problem description (0 votes, 3 answers)

What I want to do is measure the thickness of the glasses frame. My idea is to measure the thickness of the frame's outline (maybe there is a better approach?). So far I have outlined the frame of the glasses, but there are gaps where the lines do not meet. I have thought about using HoughLinesP, but I'm not sure whether that is what I need.

So far I have performed the following steps:

  • Convert the image to grayscale
  • Create an ROI around the eye/glasses area
  • Blur the image
  • Dilate the image (did this to remove any thin-framed glasses)
  • Perform Canny edge detection
  • Find the contours

These are the results:

This is the code I have so far:

//convert to grayscale
cv::Mat grayscaleImg;
cv::cvtColor( img, grayscaleImg, CV_BGR2GRAY );

//create ROI
cv::Mat eyeAreaROI(grayscaleImg, centreEyesRect);
cv::imshow("roi", eyeAreaROI);

//blur
cv::Mat blurredROI;
cv::blur(eyeAreaROI, blurredROI, Size(3,3));
cv::imshow("blurred", blurredROI);

//dilate thin lines
cv::Mat dilated_dst;
int dilate_elem = 0;
int dilate_size = 1;
int dilate_type = MORPH_RECT;

cv::Mat element = getStructuringElement(dilate_type, 
    cv::Size(2*dilate_size + 1, 2*dilate_size+1), 
    cv::Point(dilate_size, dilate_size));

cv::dilate(blurredROI, dilated_dst, element);
cv::imshow("dilate", dilated_dst);

//edge detection
int lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;    

cv::Canny(dilated_dst, dilated_dst, lowThreshold, lowThreshold*ratio, kernel_size);

//create matrix of the same type and size as ROI
Mat dst;
dst.create(eyeAreaROI.size(), dilated_dst.type());
dst = Scalar::all(0);

dilated_dst.copyTo(dst, dilated_dst);
cv::imshow("edges", dst);

//join the lines and fill in
vector<Vec4i> hierarchy;
vector<vector<Point>> contours;

cv::findContours(dilated_dst, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
cv::imshow("contours", dilated_dst);

I'm not entirely sure what the next steps are, or, as I said above, whether I should use HoughLinesP and how to implement it. Any help is greatly appreciated!

c++ opencv image-processing hough-transform canny-operator
3 Answers

4 votes

I think there are two main problems:

  1. Segmenting the glasses frame

  2. Finding the thickness of the segmented frame

I'll now post one way to segment the glasses in your sample image. Maybe the method will work for different images too, but you will probably have to adjust parameters, or you may at least be able to reuse the main idea.

The main idea is: First, find the biggest contour in the image, which should be the glasses. Second, find the two biggest contours within that biggest contour, which should be the two lenses inside the frame!

I used this image as input (this should be your blurred but not dilated image):

[input image]

// this functions finds the biggest X contours. Probably there are faster ways, but it should work...
std::vector<std::vector<cv::Point>> findBiggestContours(std::vector<std::vector<cv::Point>> contours, int amount)
{
    std::vector<std::vector<cv::Point>> sortedContours;

    if(amount <= 0) amount = contours.size();
    if(amount > contours.size()) amount = contours.size();

    for(int chosen = 0; chosen < amount; )
    {
        double biggestContourArea = 0;
        int biggestContourID = -1;
        for(unsigned int i=0; i<contours.size() && contours.size(); ++i)
        {
            double tmpArea = cv::contourArea(contours[i]);
            if(tmpArea > biggestContourArea)
            {
                biggestContourArea = tmpArea;
                biggestContourID = i;
            }
        }

        if(biggestContourID >= 0)
        {
            //std::cout << "found area: " << biggestContourArea << std::endl;
            // found biggest contour
            // add contour to sorted contours vector:
            sortedContours.push_back(contours[biggestContourID]);
            chosen++;
            // remove biggest contour from original vector:
            contours[biggestContourID] = contours.back();
            contours.pop_back();
        }
        else
        {
            // should never happen except for broken contours with size 0?!?
            return sortedContours;
        }

    }

    return sortedContours;
}

int main()
{
    cv::Mat input = cv::imread("../Data/glass2.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat inputColors = cv::imread("../Data/glass2.png"); // used for displaying later
    cv::imshow("input", input);

    //edge detection
    int lowThreshold = 100;
    int ratio = 3;
    int kernel_size = 3;    

    cv::Mat canny;
    cv::Canny(input, canny, lowThreshold, lowThreshold*ratio, kernel_size);
    cv::imshow("canny", canny);

    // close gaps with "close operator"
    cv::Mat mask = canny.clone();
    cv::dilate(mask,mask,cv::Mat());
    cv::dilate(mask,mask,cv::Mat());
    cv::dilate(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());

    cv::imshow("closed mask",mask);

    // extract outermost contour
    std::vector<cv::Vec4i> hierarchy;
    std::vector<std::vector<cv::Point>> contours;
    //cv::findContours(mask, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(mask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);


    // find biggest contour which should be the outer contour of the frame
    std::vector<std::vector<cv::Point>> biggestContour;
    biggestContour = findBiggestContours(contours,1); // find the one biggest contour
    if(biggestContour.size() < 1)
    {
        std::cout << "Error: no outer frame of glasses found" << std::endl;
        return 1;
    }

    // draw contour on an empty image
    cv::Mat outerFrame = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    cv::drawContours(outerFrame,biggestContour,0,cv::Scalar(255),-1);
    cv::imshow("outer frame border", outerFrame);

    // now find the glasses which should be the outer contours within the frame. therefore erode the outer border ;)
    cv::Mat glassesMask = outerFrame.clone();
    cv::erode(glassesMask,glassesMask, cv::Mat());
    cv::imshow("eroded outer",glassesMask);

    // after erosion if we dilate, it's an Open-Operator which can be used to clean the image.
    cv::Mat cleanedOuter;
    cv::dilate(glassesMask,cleanedOuter, cv::Mat());
    cv::imshow("cleaned outer",cleanedOuter);


    // use the outer frame mask as a mask for copying canny edges. The result should be the inner edges inside the frame only
    cv::Mat glassesInner;
    canny.copyTo(glassesInner, glassesMask);

    // there is small gap in the contour which unfortunately cant be closed with a closing operator...
    cv::dilate(glassesInner, glassesInner, cv::Mat());
    //cv::erode(glassesInner, glassesInner, cv::Mat());
    // this part was cheated... in fact we would like to erode directly after dilation to not modify the thickness but just close small gaps.
    cv::imshow("innerCanny", glassesInner);


    // extract contours from within the frame
    std::vector<cv::Vec4i> hierarchyInner;
    std::vector<std::vector<cv::Point>> contoursInner;
    //cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // find the two biggest contours which should be the glasses within the frame
    std::vector<std::vector<cv::Point>> biggestInnerContours;
    biggestInnerContours = findBiggestContours(contoursInner,2); // find the two biggest contours
    if(biggestInnerContours.size() < 1)
    {
        std::cout << "Error: no inner frames of glasses found" << std::endl;
        return 1;
    }

    // draw the 2 biggest contours which should be the inner glasses
    cv::Mat innerGlasses = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    for(unsigned int i=0; i<biggestInnerContours.size(); ++i)
        cv::drawContours(innerGlasses,biggestInnerContours,i,cv::Scalar(255),-1);

    cv::imshow("inner frame border", innerGlasses);

    // since we dilated earlier and didnt erode quite afterwards, we have to erode here... this is a bit of cheating :-(
    cv::erode(innerGlasses,innerGlasses,cv::Mat() );

    // remove the inner glasses from the frame mask
    cv::Mat fullGlassesMask = cleanedOuter - innerGlasses;
    cv::imshow("complete glasses mask", fullGlassesMask);

    // color code the result to get an impression of segmentation quality
    cv::Mat outputColors1 = inputColors.clone();
    cv::Mat outputColors2 = inputColors.clone();
    for(int y=0; y<fullGlassesMask.rows; ++y)
        for(int x=0; x<fullGlassesMask.cols; ++x)
        {
            if(!fullGlassesMask.at<unsigned char>(y,x))
                outputColors1.at<cv::Vec3b>(y,x)[1] = 255;
            else
                outputColors2.at<cv::Vec3b>(y,x)[1] = 255;

        }

    cv::imshow("output", outputColors1);

    /*
    cv::imwrite("../Data/Output/face_colored.png", outputColors1);
    cv::imwrite("../Data/Output/glasses_colored.png", outputColors2);
    cv::imwrite("../Data/Output/glasses_fullMask.png", fullGlassesMask);
    */

    cv::waitKey(-1);
    return 0;
}

I get this segmentation result:

[segmentation result]

The overlay on the original image gives you an impression of the segmentation quality:

[overlay on original image]

and the inverse:

[inverse overlay]

There are some tricky parts in the code and it is not tidied up yet. I hope it is understandable.

The next step would be to compute the thickness of the segmented frame. My suggestion is to compute the distance transform of the inverted mask. From that you will want to run a ridge detection or skeletonize the mask to find the ridge. After that, use the median of the ridge distances.
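
A minimal sketch of that idea, continuing from the code above with the same OpenCV 2.x-style constants: it assumes fullGlassesMask holds the frame as non-zero pixels (invert it first if your mask convention is the other way around), and it uses a simple morphological thinning loop in place of a dedicated ridge detector. <algorithm> and <iostream> are needed for the median and the output.

// distance transform: every frame pixel gets its distance to the nearest background pixel
cv::Mat dist;
cv::distanceTransform(fullGlassesMask, dist, CV_DIST_L2, 3);

// crude morphological skeleton of the frame mask, used here as a stand-in for ridge detection
cv::Mat skel = cv::Mat::zeros(fullGlassesMask.size(), CV_8UC1);
cv::Mat working = fullGlassesMask.clone();
cv::Mat eroded, opened;
cv::Mat crossElement = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3,3));
while(cv::countNonZero(working) > 0)
{
    cv::erode(working, eroded, crossElement);
    cv::dilate(eroded, opened, crossElement);   // opening of the current mask
    cv::subtract(working, opened, opened);      // pixels removed by the opening
    cv::bitwise_or(skel, opened, skel);         // accumulate them as skeleton pixels
    eroded.copyTo(working);
}

// collect the distance values along the skeleton (the ridge) and take their median
std::vector<float> ridgeDist;
for(int y=0; y<skel.rows; ++y)
    for(int x=0; x<skel.cols; ++x)
        if(skel.at<unsigned char>(y,x))
            ridgeDist.push_back(dist.at<float>(y,x));

if(!ridgeDist.empty())
{
    std::nth_element(ridgeDist.begin(), ridgeDist.begin() + ridgeDist.size()/2, ridgeDist.end());
    float medianRidgeDist = ridgeDist[ridgeDist.size()/2];
    // the ridge distance is roughly half the local frame thickness
    std::cout << "estimated frame thickness: " << 2.0f * medianRidgeDist << " px" << std::endl;
}

Using the median rather than the mean keeps spurious skeleton branches (for example at the corners or the nose bridge) from skewing the estimate too much.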

Anyway, I hope this post helps you a little, even though it is not a complete solution yet.


1 vote

Depending on lighting, frame color, etc., this may or may not work, but how about simple color detection to separate the frame? Frame colors are usually much darker than human skin. You would end up with a binary image (just black and white), and by counting the number of black pixels (the area), you get the area of the frame.
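
A rough sketch of that darkness-based idea, reusing the grayscale ROI (eyeAreaROI) from the question; the threshold of 60 is just a guess and would need tuning per image and lighting:

// frame pixels are assumed to be much darker than skin, so an inverse threshold
// turns the dark frame into white mask pixels
cv::Mat frameMask;
cv::threshold(eyeAreaROI, frameMask, 60, 255, cv::THRESH_BINARY_INV);

// small opening to remove isolated dark pixels (shadows, stray hairs, ...)
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3,3));
cv::morphologyEx(frameMask, frameMask, cv::MORPH_OPEN, kernel);

// frame area in pixels = number of mask pixels
int frameAreaPx = cv::countNonZero(frameMask);
std::cout << "frame area: " << frameAreaPx << " px" << std::endl;

Dark eyebrows or pupils would also end up in the mask, so in practice you would probably have to restrict the ROI further or mask them out first.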

Another possible way is to get better edge detection by adjusting the blur, dilation, erosion, or a combination of these, until you get cleaner contours. You would then need to pick out the contours of the lenses and apply cvContourArea.
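
For the cvContourArea part, a minimal sketch using the C++ equivalent cv::contourArea (needs C++11 and <algorithm>); edgeImage stands for whatever cleaned-up binary image you end up with, and the two largest contours are assumed to be the lenses:

// find contours in the cleaned-up binary image and sort them by area, largest first
std::vector<std::vector<cv::Point>> lensContours;
cv::findContours(edgeImage, lensContours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

std::sort(lensContours.begin(), lensContours.end(),
          [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b)
          { return cv::contourArea(a) > cv::contourArea(b); });

// the two biggest contours are assumed to be the two lenses
if(lensContours.size() >= 2)
{
    double lensArea = cv::contourArea(lensContours[0]) + cv::contourArea(lensContours[1]);
    std::cout << "combined lens area: " << lensArea << " px" << std::endl;
}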


0 votes

I recently came across safety glasses with side shields and found them quite good. They have several advantages:

  • Side shields provide extra coverage and protection for the sides of the eyes, reducing the risk of debris, dust, or foreign objects entering from the side.
  • In many workplaces, wearing safety glasses with side shields is mandatory to comply with safety regulations and standards. They are essential for keeping workers safe in potentially hazardous environments.
  • Safety glasses with side shields can be used in a variety of settings, including construction sites, laboratories, manufacturing facilities, and workshops, providing versatile eye protection suited to different work environments.
  • Side shields help keep airflow from drying out or irritating the eyes, which is especially useful for people working in windy or dusty conditions.
  • In environments where chemical or liquid splashes are a concern, side shields keep these substances out of the eyes.
  • Safety glasses with side shields come in a variety of styles, lens types, and frame designs, so individuals can choose a pair that is comfortable and suited to their specific needs.

In short, safety glasses with side shields are an important piece of personal protective equipment (PPE) and are essential for maintaining eye safety across a wide range of industries and activities.
