FaceNet directly learns a mapping from face images to a compact Euclidean space in which distances directly correspond to a measure of face similarity. Instead of the traditional softmax layer for classification, the model extracts an intermediate layer as a feature embedding, mapping each face image to a point in a multidimensional Euclidean space; similarity is then expressed by the distance between points: the smaller the distance, the more likely two images show the same person, and the greater the distance, the more likely they show different people. Various applications such as face recognition, face verification, and face grouping are then built on top of this embedding. Its core contributions are a convolutional neural network architecture and the use of triplet loss as the loss function.
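As a rough illustration of the triplet loss mentioned above, the following numpy sketch computes the loss for one (anchor, positive, negative) triple; the toy 3-D vectors and the margin value are made up here purely for demonstration (real FaceNet embeddings are higher-dimensional unit vectors):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Triplet loss: pull the anchor toward the positive (same identity)
    and push it away from the negative (different identity) by at least
    the margin alpha."""
    pos_dist = np.sum((anchor - positive) ** 2, axis=-1)  # squared L2 distance
    neg_dist = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(pos_dist - neg_dist + alpha, 0.0)

# Toy 3-D embeddings for illustration only.
anchor   = np.array([0.0, 0.0, 1.0])
positive = np.array([0.0, 0.1, 0.99])  # close to the anchor
negative = np.array([1.0, 0.0, 0.0])   # far from the anchor
print(triplet_loss(anchor, positive, negative))  # → 0.0 (margin satisfied)
```

When the negative is already farther from the anchor than the positive by more than the margin, the loss is zero; swapping the positive and negative yields a positive loss that training would minimize.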
[Operation steps and instructions] Open the FaceNet APP. The APP is divided into four main functions: data preparation, image preprocessing, image clustering, and image classification.
Data preparation:
1. Choose the data set for AI learning. To train on your own images, click Browse to open the file location. It is recommended to copy the default data set folder and rename it with your own data name.
2. Image preprocessing.
- Input image.
Click Browse next to Input image and open the folder that contains all the image folders. For example, for five people's faces A, B, C, D, and E, there should be five picture folders, each containing pictures of one person only. Note: image files should preferably be .jpg, .png, or .jpeg.
Click Preprocess to crop the face area and resize it to 160 * 160 pixels.
- Output image.
Click Browse next to Output image to open the folder and view the processed images. The steps above need to be repeated twice: once for the Train (training) images and once for the Test images.
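The crop-and-resize idea behind the Preprocess step can be sketched as follows, assuming the face bounding box is already known (the app locates it automatically; face-detection pipelines for FaceNet commonly use a detector such as MTCNN). The box coordinates and image size below are hypothetical:

```python
import numpy as np

def crop_and_resize(image, box, size=160):
    """Crop a face bounding box from an H x W x 3 image array and resize
    it to size x size with nearest-neighbour sampling."""
    top, left, bottom, right = box
    face = image[top:bottom, left:right]
    h, w = face.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return face[rows][:, cols]

# Dummy 480 x 640 RGB image with a made-up face box.
img = np.zeros((480, 640, 3), dtype=np.uint8)
face_160 = crop_and_resize(img, box=(100, 200, 300, 400))
print(face_160.shape)  # → (160, 160, 3)
```

In practice the app performs this for every image in the input folders, which is why the output folder mirrors the input folder structure.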
After the above images are processed, there are two main applications. One is clustering, which automatically groups face images so that each group contains similar faces. The other is classification, which requires training a classifier; faces can then be recognized with that classifier.
Image clustering:
1. Input image. Click Browse next to Input image to open the folder, and place the cropped output images from the previous Preprocess step there. Note: image files should preferably be .jpg, .png, or .jpeg.
2. Click Clustering.
3. After grouping, click Browse next to Clustering result to open the folder. You will see several folders named 1, 2, 3, ..., each containing faces of the same person.
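Conceptually, clustering groups embeddings that lie close together in the Euclidean space. The greedy sketch below assigns each embedding to the first cluster whose representative is within a distance threshold; the threshold value and the toy 2-D "embeddings" are invented for illustration, and the app's actual algorithm may differ:

```python
import numpy as np

def cluster_faces(embeddings, threshold=0.7):
    """Greedy clustering: each embedding joins the first cluster whose
    representative lies within `threshold` Euclidean distance, otherwise
    it starts a new cluster. Returns one label per embedding
    (0, 1, 2, ...), mirroring the numbered output folders."""
    labels, representatives = [], []
    for e in embeddings:
        for idx, rep in enumerate(representatives):
            if np.linalg.norm(e - rep) < threshold:
                labels.append(idx)
                break
        else:
            representatives.append(e)
            labels.append(len(representatives) - 1)
    return labels

# Two tight groups of toy 2-D embeddings -> two clusters.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(cluster_faces(emb))  # → [0, 0, 1, 1]
```

Each resulting label corresponds to one of the numbered folders in the Clustering result.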
Image classification:
1. Training. Press Train to train a classifier.
Input model is the model FaceNet uses to extract facial features.
Input image is the previously processed 160 * 160 pixel face area.
Output pkl is the output file of the trained classifier.
Inference: click -> select a face image -> press the Inference button -> the result pops up.
Inference folder: click -> select the folder containing the face images -> press the Inference button -> the result pops up.
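The Train and Inference steps above can be sketched as a minimal nearest-centroid classifier over embeddings, saved with pickle as in the Output pkl step. The real app extracts embeddings with the FaceNet model and may fit a different classifier (e.g. an SVM); the names and toy 2-D embeddings below are made up for illustration:

```python
import pickle
import numpy as np

def train_classifier(embeddings, names, pkl_path):
    """Train step (sketch): average each person's embeddings into a
    centroid and pickle the result (the "Output pkl")."""
    centroids = {n: np.mean([e for e, m in zip(embeddings, names) if m == n], axis=0)
                 for n in set(names)}
    with open(pkl_path, "wb") as f:
        pickle.dump(centroids, f)

def infer(embedding, pkl_path):
    """Inference step (sketch): load the classifier and return the
    identity with the nearest centroid."""
    with open(pkl_path, "rb") as f:
        centroids = pickle.load(f)
    return min(centroids, key=lambda n: np.linalg.norm(embedding - centroids[n]))

# Toy embeddings for two hypothetical people, A and B.
emb = [np.array([0.0, 0.0]), np.array([0.2, 0.0]),
       np.array([4.0, 4.0]), np.array([4.2, 4.0])]
names = ["A", "A", "B", "B"]
train_classifier(emb, names, "classifier.pkl")
print(infer(np.array([0.1, 0.1]), "classifier.pkl"))  # → A
```

The Inference folder variant simply repeats the `infer` call for every image in a folder.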
Inference_api, Inference_api_browser: click Inference_api to start the face classification server -> click Inference_api_browser to open the browser -> select a picture and press Submit -> the result is returned.
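To illustrate the server/browser split of the Inference_api workflow, here is a hypothetical minimal HTTP server that answers classification requests; it accepts an embedding as JSON rather than an image, and the `KNOWN` table stands in for a trained classifier. None of this reflects the app's actual endpoints or wire format:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in "classifier": toy identity centroids (hypothetical).
KNOWN = {"A": [0.0, 0.0], "B": [4.0, 4.0]}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON body like {"embedding": [x, y]} and answer the
        # name with the nearest centroid.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        emb = body["embedding"]
        name = min(KNOWN, key=lambda n: sum((a - b) ** 2 for a, b in zip(KNOWN[n], emb)))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(name.encode())

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}"
req = urllib.request.Request(url, data=json.dumps({"embedding": [0.1, 0.2]}).encode())
print(urllib.request.urlopen(req).read().decode())  # → A
server.shutdown()
```

The browser step in the app plays the role of the client request above, submitting a picture through a form instead of raw JSON.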
Inference_webcam: set the ID of the IP camera and the threshold, then press Run to perform face classification through the camera.