How to manually set keypoints and extract features


I am using ORB to detect keypoints on a set of images, as in this example:

[image: ORB keypoints detected on an example image]

What I want to do is this: I want to manually set 22 points at specific coordinates in the image and store the features extracted at those points in a feature vector. For example:

[image: the 22 manually chosen points marked on an example image]

Afterwards, each of these features would be stored in a 22-dimensional vector.

My current code for loading the images and setting the keypoints is:

import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import confusion_matrix 
from sklearn import svm, metrics, datasets
from sklearn.utils import Bunch
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import ExtraTreesClassifier
from skimage.io import imread
from skimage.transform import resize
import skimage
import cv2

DATADIR = "C:/Dataset"
CATEGORIES = ["class 1", "class 2", "class 3", "class 4", "class 5"]



def load_image_files(dimension=(64, 64)):
    descr = "An image classification dataset"
    flat_data = []
    target = []
    images = []
    for category in CATEGORIES:
        path = os.path.join(DATADIR, category)
        class_num = CATEGORIES.index(category)
        for person in os.listdir(path):
            personfolder = os.path.join(path, person)
            for imgname in os.listdir(personfolder):
                fullpath = os.path.join(personfolder, imgname)
                image = skimage.io.imread(fullpath)

                # Detect up to 22 ORB keypoints and show them for inspection.
                orb = cv2.ORB_create(22)
                kp, des = orb.detectAndCompute(image, None)
                drawnImages = cv2.drawKeypoints(image, kp, None,
                                                flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
                cv2.imshow("Image", drawnImages)
                cv2.waitKey(0)

                # Resize and flatten the pixels for the classifier.
                img_resized = resize(image, dimension, anti_aliasing=True, mode='reflect')
                flat_data.append(img_resized.flatten())
                images.append(img_resized)   # store the image itself, not the growing list
                target.append(class_num)

    flat_data = np.array(flat_data)
    pd.DataFrame(flat_data).to_csv("Rostos1.csv")
    target = np.array(target)
    images = np.array(images)
    return Bunch(data=flat_data,
                 target=target,
                 target_names=CATEGORIES,
                 images=images,
                 DESCR=descr)
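For reference, a minimal usage sketch of the function above, assuming the folder layout implied by DATADIR and CATEGORIES; the split parameters are arbitrary:

dataset = load_image_files()

# dataset.data: one flattened, resized image per row; dataset.target: class indices
print(dataset.data.shape, dataset.target.shape)

X_train, X_test, y_train, y_test = train_test_split(
    dataset.data, dataset.target, test_size=0.3, random_state=42)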

These are the coordinates I want to extract the features from (a sketch of turning them into keypoints follows the list):

p1 = 60, 10
p2 = 110, 10
p3 = 170, 10
p4 = 25, 60
p5 = 60, 40
p6 = 110, 35
p7 = 170, 35
p8 = 190, 60
p9 = 30, 95
p10 = 60, 80
p11 = 100, 105
p12 = 120, 105
p13 = 160, 180
p14 = 185, 95
p15 = 25, 160
p16 = 55, 160
p17 = 155, 160
p18 = 185, 160
p19 = 65, 200
p20 = 83, 186
p21 = 128, 186
p22 = 157, 197
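A minimal sketch, assuming these are (x, y) pixel positions, of turning the points into cv2.KeyPoint objects and drawing them to check their placement, reusing the drawKeypoints call from the loading code (the image path is a placeholder):

points = [p1, p2, p3, p4, p5, p6, p7, p8, p9, p10, p11,
          p12, p13, p14, p15, p16, p17, p18, p19, p20, p21, p22]

# One cv2.KeyPoint per hand-picked coordinate; 31 is ORB's default patch size.
manual_kp = [cv2.KeyPoint(float(x), float(y), 31) for (x, y) in points]

img = cv2.imread("example.png")  # placeholder path
drawn = cv2.drawKeypoints(img, manual_kp, None,
                          flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow("Manual keypoints", drawn)
cv2.waitKey(0)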
python opencv computer-vision feature-extraction
1 Answer

Use the compute method of the ORB API. The standard pattern looks something like this:

kp = orb.detect(img,None)
kp, des = orb.compute(img, kp)

But in your case the keypoints come from user input, so use something like this:

input_kp = ...  # comes from the user (a list of cv2.KeyPoint objects)
input_kp, des = orb.compute(img, input_kp)

Make sure the input keypoints match the format that the compute method expects (a list of cv2.KeyPoint objects).
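A hedged end-to-end sketch of that suggestion using the 22 coordinates from the question (the image path is a placeholder); note that compute() may drop keypoints that fall too close to the image border, so the surviving count should be checked rather than assumed to be 22:

img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
orb = cv2.ORB_create()

# The 22 hand-picked (x, y) coordinates from the question; 31 is ORB's default patch size.
coords = [(60, 10), (110, 10), (170, 10), (25, 60), (60, 40), (110, 35),
          (170, 35), (190, 60), (30, 95), (60, 80), (100, 105), (120, 105),
          (160, 180), (185, 95), (25, 160), (55, 160), (155, 160), (185, 160),
          (65, 200), (83, 186), (128, 186), (157, 197)]
input_kp = [cv2.KeyPoint(float(x), float(y), 31) for (x, y) in coords]

input_kp, des = orb.compute(img, input_kp)

# des is a (n_keypoints, 32) uint8 array: one 32-byte binary descriptor per
# surviving keypoint. Flatten it to get a single feature vector for a classifier.
print(len(input_kp), des.shape)
feature_vector = des.flatten()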
