<> Face feature extraction

This article mainly uses the dlib library for face recognition and feature extraction.

The dlib library uses 68 feature points to mark facial features; by taking the feature points in the corresponding range, we obtain the corresponding facial part. The figure below shows the 68 feature points. For example, to extract the eye features, points 37 to 48 are enough (in the 0-based indexing used in code, these are indices 36 to 47).

We add the following mapping to the code so that each facial part can be accessed directly by name.
from collections import OrderedDict

# Index ranges (start, end) into the 68-point landmark array
FACIAL_LANDMARKS_68_IDXS = OrderedDict([
    ("mouth", (48, 68)),
    ("right_eyebrow", (17, 22)),
    ("left_eyebrow", (22, 27)),
    ("right_eye", (36, 42)),
    ("left_eye", (42, 48)),
    ("nose", (27, 36)),
    ("jaw", (0, 17)),
])

# The same idea for the 5-point model
FACIAL_LANDMARKS_5_IDXS = OrderedDict([
    ("right_eye", (2, 3)),
    ("left_eye", (0, 1)),
    ("nose", (4, 5)),
])
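As a quick check of how the mapping is used (a standalone sketch: the mapping is trimmed to two entries for brevity, and the landmark array is synthetic, standing in for a real predictor output):

```python
from collections import OrderedDict
import numpy as np

# Trimmed copy of the mapping above (two entries for brevity)
FACIAL_LANDMARKS_68_IDXS = OrderedDict([
    ("right_eye", (36, 42)),
    ("left_eye", (42, 48)),
])

# Synthetic (68, 2) landmark array in place of a real predictor result
landmarks = np.arange(68 * 2).reshape(68, 2)

# The stored (i, j) pair slices out the six points of the right eye
(i, j) = FACIAL_LANDMARKS_68_IDXS["right_eye"]
right_eye = landmarks[i:j]
print(right_eye.shape)  # (6, 2)
```

Each named part is therefore just a slice of one shared landmark array, which is what makes the per-part loop later in the article so compact.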
<> Data preprocessing and model loading
We resize the image as required by the input, convert it to a grayscale image, load the get_frontal_face_detector model along with the landmark predictor, and then run detection.
import dlib
import cv2

# Load the face detector and the key-point predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])

# Read the input image and preprocess it
image = cv2.imread(args["image"])
(h, w) = image.shape[:2]
width = 500
r = width / float(w)
dim = (width, int(h * r))
image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Face detection
rects = detector(gray, 1)
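The width-500 resize keeps the aspect ratio; the arithmetic alone can be checked without OpenCV (resize_dims is a hypothetical helper name, not part of the original code):

```python
def resize_dims(h, w, target_width=500):
    # New (width, height) that preserves the original h/w aspect ratio,
    # mirroring the r = width / float(w) computation above
    r = target_width / float(w)
    return (target_width, int(h * r))

print(resize_dims(480, 640))  # (500, 375)
```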
<> Traverse the key points of each face

For each detected face, predict the feature points, locate the key facial parts, and convert the result into a NumPy array.
for rect in rects:
    shape = predictor(gray, rect)
    shape = shape_to_np(shape)
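shape_to_np is not defined in the snippet; it commonly converts dlib's full_object_detection result into a NumPy array of (x, y) coordinates. A minimal sketch (the body is an assumption, modeled on the usual imutils-style helper):

```python
import numpy as np

def shape_to_np(shape, dtype="int"):
    # Copy each dlib landmark (part) into a (num_parts, 2) array of (x, y)
    coords = np.zeros((shape.num_parts, 2), dtype=dtype)
    for k in range(shape.num_parts):
        coords[k] = (shape.part(k).x, shape.part(k).y)
    return coords
```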
For each part, make a copy of the image to draw on, and label the copy with the name of the currently detected part.
# Traverse each part
for (name, (i, j)) in FACIAL_LANDMARKS_68_IDXS.items():
    clone = image.copy()
    cv2.putText(clone, name, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                0.7, (0, 0, 255), 2)
According to the located positions, draw the feature points on the image.
    for (x, y) in shape[i:j]:
        cv2.circle(clone, (x, y), 3, (0, 0, 255), -1)
Extract the region of the current facial part and resize it for display.
    (x, y, w, h) = cv2.boundingRect(np.array([shape[i:j]]))
    roi = image[y:y + h, x:x + w]
    (h, w) = roi.shape[:2]
    width = 250
    r = width / float(w)
    dim = (width, int(h * r))
    roi = cv2.resize(roi, dim, interpolation=cv2.INTER_AREA)
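The ROI extraction relies on cv2.boundingRect; for integer point sets, its result can be reproduced with plain NumPy (bounding_rect is a hypothetical stand-in, shown only to make the arithmetic concrete):

```python
import numpy as np

def bounding_rect(points):
    # (x, y, w, h) of the axis-aligned box around integer 2-D points,
    # using an inclusive width/height (max - min + 1)
    pts = np.asarray(points)
    x, y = pts.min(axis=0)
    w, h = pts.max(axis=0) - pts.min(axis=0) + 1
    return int(x), int(y), int(w), int(h)

eye = np.array([[10, 20], [15, 18], [13, 25]])
print(bounding_rect(eye))  # (10, 18, 6, 8)
```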
Finally, display the results.
    cv2.imshow("ROI", roi)
    cv2.imshow("Image", clone)
    cv2.waitKey(0)
<> Final effect

Original image

Face detection

Detection of all facial parts

Detection of key parts
