Face encoding with a "truncated" large 68-point model


face_recognition version: 1.3.0, Python version: 3.9.2, OS: Windows 10 64-bit

Description: I have been trying to tackle recognition and identification of "masked" faces. One idea was to compare only certain parts of the face (eyes, eyebrows, part of the nose) by using a truncated set of facial landmarks. So I defined a new dlib class that returns the "raw" landmarks for the face parts listed above and (0, 0) for the remaining points of the large 68-point model. I also defined a new function, face_encodings_masked(), that uses this class.
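For reference, the index ranges used in the code follow the standard 68-point layout of dlib's shape predictor (the same grouping face_recognition exposes in face_landmarks()). The sketch below is only a reminder of that layout and is not part of the original question:

# Standard 68-point landmark layout used by dlib's
# shape_predictor_68_face_landmarks.dat model (0-based indices).
# The masked ranges in the question (2-14, 29-35, 48-67) cover the
# lower jaw, the lower part of the nose and the mouth.
LANDMARK_GROUPS = {
    "chin":          range(0, 17),
    "left_eyebrow":  range(17, 22),
    "right_eyebrow": range(22, 27),
    "nose_bridge":   range(27, 31),
    "nose_tip":      range(31, 36),
    "left_eye":      range(36, 42),
    "right_eye":     range(42, 48),
    "lips":          range(48, 68),
}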

What I did

import face_recognition
from face_recognition import api as frapi
import numpy as np
import dlib



class Full_object_detection_masked(dlib.full_object_detection):
    def part(self, idx: int):
        # Zero out the masked landmark ranges (lower jaw, lower nose, mouth),
        # kept consistent with parts() below, and return a dlib.point rather
        # than a plain tuple so callers that expect .x/.y still work.
        if idx in range(2, 15) or idx in range(29, 36) or idx in range(48, 68):
            return dlib.point(0, 0)
        return super().part(idx)

    def parts(self):
        # Rebuild the full 68-point list: copy the kept ranges (0-1, 15-28,
        # 36-47) and replace the masked ones (2-14, 29-35, 48-67) with (0, 0).
        lst = dlib.points()
        for idx in range(0, 68):
            if idx in range(2, 15) or idx in range(29, 36) or idx in range(48, 68):
                lst.insert(idx, dlib.point(0, 0))
            else:
                lst.insert(idx, dlib.point(super().part(idx).x, super().part(idx).y))
        return lst
    
    
def face_encodings_masked(face_image, known_face_locations=None, num_jitters=1, model="large"):
    """
    Given an image, return the 128-dimension face encoding for each face in the image.

    :param face_image: The image that contains one or more faces
    :param known_face_locations: Optional - the bounding boxes of each face if you already know them.
    :param num_jitters: How many times to re-sample the face when calculating encoding. Higher is more accurate, but slower (i.e. 100 is 100x slower)
    :param model: Optional - which model to use. "large" (default) or "small" which only returns 5 points but is faster.
    :return: A list of 128-dimensional face encodings (one for each face in the image)
    """
    raw_landmarks = frapi._raw_face_landmarks(face_image,
                                        known_face_locations,
                                        model)
    # Wrap each detected landmark set in the masking class defined above.
    masked_raw_landmarks = []
    for lm in raw_landmarks:
        masked_raw_landmarks.append(Full_object_detection_masked(lm.rect, lm.parts()))
    return [np.array(
        frapi.face_encoder.compute_face_descriptor(
            face_image,
            raw_landmark_set,
            num_jitters)) for raw_landmark_set in masked_raw_landmarks]


image = face_recognition.load_image_file("1.jpg")
# Call the modified face_encodings with truncated face landmarks.
enc_masked = face_encodings_masked(image, known_face_locations=None, num_jitters=1, model="large")
# Call the standard face_encodings with the full set of face landmarks.
enc_full = frapi.face_encodings(image, known_face_locations=None, num_jitters=1, model="large")
print("Difference between face encodings with partial (masked) and full landmarks:")
print(enc_full[0] - enc_masked[0])

I expected to see a difference, but there is none:

Difference between face encodings with partial (masked) and full landmarks:

[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0.]

I tried digging through the source code of the face_recognition and dlib modules, but every relevant method takes the facial landmarks as a parameter... So, if the facial landmarks are different, the face encodings should be different, right? But that is not the case. Any ideas?
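As a minimal sanity check (my sketch, not from the original post), assuming the code above has already been run on "1.jpg", one can compare index by index what the original and the masked detection objects report through the Python API, to confirm whether the landmark sets differ at all before they are handed to compute_face_descriptor:

# List the landmark indices where the masked detection object reports
# different coordinates than the original one.
raw = frapi._raw_face_landmarks(image, None, "large")[0]
masked = Full_object_detection_masked(raw.rect, raw.parts())
changed = [i for i in range(raw.num_parts)
           if (raw.part(i).x, raw.part(i).y) != (masked.part(i).x, masked.part(i).y)]
print("Indices where the masked landmarks differ:", changed)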

Update (03-2024): A few years ago I moved from face_recognition/dlib to the insightface library. The latter works quite differently and produces 512-dimensional face vectors instead of the 128-dimensional ones generated by dlib.
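For comparison, a minimal sketch of how such a 512-dimensional embedding is typically obtained with insightface's FaceAnalysis app (the model pack name and parameters below reflect recent insightface releases and are assumptions, not part of the original post):

import cv2
from insightface.app import FaceAnalysis

# Detection + recognition bundle; "buffalo_l" is the default model pack.
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))   # ctx_id selects the device; adjust for your setup

img = cv2.imread("1.jpg")                    # BGR image, as insightface expects
faces = app.get(img)
if faces:
    emb = faces[0].normed_embedding          # L2-normalized 512-d numpy vector
    print(emb.shape)                         # (512,)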

python face-recognition dlib
1 Answer

I decided to go a different way: apply virtual masks (5 different types) onto the base face and then store additional encodings for the "masked" faces as well. Well, it works. Not 100% of the time, but it does. How it works - on GitHub
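A rough sketch of that idea (illustrative only; the function and variable names are mine, not from the linked repository): store one encoding of the bare face per person plus encodings of the same face with a few synthetic mask overlays, then match a probe against all of them.

import face_recognition
import numpy as np

def enroll(image_paths, mask_overlays):
    """Build a gallery of (person_id, encoding) pairs for bare and masked faces."""
    gallery = []
    for person_id, path in image_paths.items():
        img = face_recognition.load_image_file(path)
        encs = face_recognition.face_encodings(img)
        if not encs:
            continue                                    # no face found in this image
        gallery.append((person_id, encs[0]))            # bare face
        for apply_mask in mask_overlays:                # e.g. 5 different mask overlays
            masked_img = apply_mask(img)                # hypothetical overlay function
            masked_encs = face_recognition.face_encodings(masked_img)
            if masked_encs:
                gallery.append((person_id, masked_encs[0]))
    return gallery

def identify(probe_img, gallery, tolerance=0.6):
    """Return the best-matching person_id, or None if nothing is close enough."""
    probe = face_recognition.face_encodings(probe_img)
    if not probe:
        return None
    known = np.array([enc for _, enc in gallery])
    distances = face_recognition.face_distance(known, probe[0])
    best = int(np.argmin(distances))
    return gallery[best][0] if distances[best] <= tolerance else None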
