[MYAI Studio SDK] Image-FaceNet-Jupyter

FaceNet can be applied to face grouping and face classification. It judges the similarity of two faces by the Euclidean distance between their embeddings, and in this way achieves face recognition.
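
For reference, the similarity check can be pictured as the Euclidean distance between two embedding vectors. The sketch below is only illustrative: the dummy vectors and the threshold are assumptions, not values taken from this SDK.

    import numpy as np

    def face_distance(emb_a, emb_b):
        # Euclidean distance between two FaceNet embedding vectors.
        return float(np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b)))

    # Dummy 512-dimensional embeddings stand in for real FaceNet outputs.
    emb_a = np.random.rand(512)
    emb_b = np.random.rand(512)

    # A smaller distance means the two faces are more likely the same person;
    # the 1.1 cutoff is illustrative and depends on the trained model.
    print(face_distance(emb_a, emb_b) < 1.1)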

 

[Instruction]

 

The solution process is:

Prepare images -> Train -> Face image inference.

Prepare images -> Face image grouping.

Prepare images -> Face image comparison.

 

1. 1_preprocess.ipynb

Extracts face images from the images in data/image and data/image_test, scales them to 160x160 pixels, and saves them in the data/image-160 and data/image_test folders (see the sketch after the remarks below).

Remarks:

The images in data/image are used for training; the images in data/image_test are used for testing.

The face images extracted from data/image_test are additionally stored in data/clustering_image for use by 3_clustering.ipynb.

The images in each subfolder of data/image and data/image_test must all be of the same person.

The image file format must be .jpg, .png, or .jpeg.
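
As a rough sketch of what this preprocessing step does (the notebook's own implementation may differ; the use of the facenet_pytorch MTCNN detector here is an assumption):

    import os
    from PIL import Image
    from facenet_pytorch import MTCNN  # assumption: the notebook may use another detector

    # Detect one face per image and crop/resize it to 160x160 pixels.
    mtcnn = MTCNN(image_size=160, margin=0)

    src_dir, dst_dir = "data/image", "data/image-160"
    for person in os.listdir(src_dir):
        if not os.path.isdir(os.path.join(src_dir, person)):
            continue
        os.makedirs(os.path.join(dst_dir, person), exist_ok=True)
        for name in os.listdir(os.path.join(src_dir, person)):
            if not name.lower().endswith((".jpg", ".jpeg", ".png")):
                continue
            img = Image.open(os.path.join(src_dir, person, name)).convert("RGB")
            # Saves the aligned 160x160 crop; returns None when no face is found.
            mtcnn(img, save_path=os.path.join(dst_dir, person, name))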

 

2. 2_classifier.ipynb

Trains the face image classifier; the trained model will be used in notebooks 5~8.
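
The exact training code is in the notebook. A common approach is sketched below, assuming FaceNet embeddings for the 160x160 crops have already been computed and that an SVM is used as the classifier; the file names here are hypothetical, not the SDK's actual paths.

    import pickle
    import numpy as np
    from sklearn.preprocessing import LabelEncoder
    from sklearn.svm import SVC

    # embeddings: (N, 512) FaceNet embeddings of the training crops;
    # labels: N person names. Both are assumed to be prepared already.
    embeddings = np.load("embeddings.npy")                # hypothetical file
    labels = np.load("labels.npy", allow_pickle=True)     # hypothetical file

    encoder = LabelEncoder()
    y = encoder.fit_transform(labels)

    # A linear SVM with probability output is a common choice for this step.
    clf = SVC(kernel="linear", probability=True)
    clf.fit(embeddings, y)

    # Persist the classifier and label encoder for the inference notebooks (5~8).
    with open("classifier.pkl", "wb") as f:   # hypothetical path; input_pkl may differ
        pickle.dump((clf, encoder), f)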

 

3. 3_clustering.ipynb

Prepare images -> Face image grouping.

Groups the face images prepared in step 1 so that similar faces fall into the same category, and saves the results in the data/clustering_result folder.
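
A minimal sketch of this kind of grouping, assuming the embeddings have already been computed and that DBSCAN-style clustering on Euclidean distance is used (the notebook's actual algorithm and parameters may differ):

    import numpy as np
    from sklearn.cluster import DBSCAN

    # (N, 512) FaceNet embeddings of the faces in data/clustering_image (assumed precomputed).
    embeddings = np.load("clustering_embeddings.npy")   # hypothetical file

    # DBSCAN groups embeddings that lie close together; eps is only illustrative.
    clustering = DBSCAN(eps=0.9, metric="euclidean", min_samples=2).fit(embeddings)

    # Each non-negative label is one cluster (one person); -1 marks unassigned faces.
    for face_index, cluster_id in enumerate(clustering.labels_):
        print(f"face {face_index} -> cluster {cluster_id}")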

 

4. 4_compare.ipynb

Prepare images -> Face image comparison.

Takes six images as input, compares each image with the other five, and outputs the resulting distance matrix. The smaller the number, the more likely two face images belong to the same person.
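
A minimal sketch of how such a distance matrix can be computed from six embeddings (dummy data stands in for the real embeddings; the notebook's own code may differ):

    import numpy as np

    # (6, 512) FaceNet embeddings of the six input images (dummy data here).
    embeddings = np.random.rand(6, 512)

    # Pairwise Euclidean distances; entry [i, j] compares image i with image j.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    distance_matrix = np.linalg.norm(diff, axis=-1)

    print(np.round(distance_matrix, 3))   # smaller values suggest the same person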

 

5. 5_inference.ipynb

Takes an image as input, classifies it with the classification model trained by 2_classifier.ipynb, and infers who the person in the image is.

Parameter Description:

input_image is the path of the image to be inferred.

input_pkl is the path of the classification model trained by 2_classifier.ipynb. Changing the file location and name is not recommended.
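
A rough, self-contained sketch of how single-image inference of this kind typically works, assuming a facenet_pytorch backbone and a pickled (classifier, label encoder) pair as in the step-2 sketch; the example paths and the backbone choice are assumptions, not the SDK's actual defaults.

    import pickle
    import torch
    from PIL import Image
    from facenet_pytorch import MTCNN, InceptionResnetV1   # assumed backbone

    input_image = "data/image_test/person1/sample.jpg"   # hypothetical example path
    input_pkl = "classifier.pkl"                          # hypothetical model path

    mtcnn = MTCNN(image_size=160)
    resnet = InceptionResnetV1(pretrained="vggface2").eval()

    with open(input_pkl, "rb") as f:
        clf, encoder = pickle.load(f)

    face = mtcnn(Image.open(input_image).convert("RGB"))   # 3x160x160 tensor, or None
    with torch.no_grad():
        embedding = resnet(face.unsqueeze(0)).numpy()      # (1, 512) embedding
    name = encoder.inverse_transform(clf.predict(embedding))[0]
    print("Predicted person:", name)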

 

6. 6_inference_folder.ipynb

Takes the path of an image folder as input, classifies each image with the classification model trained by 2_classifier.ipynb, and infers who the person in each image in the folder is.

Parameter Description:

input_image is the path of the image folder to be inferred.

input_pkl is the path of the classification model trained by 2_classifier.ipynb. Changing the file location and name is not recommended.
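
A minimal sketch of walking the folder and handing each image to the single-image routine from the previous sketch (the call is only indicated in a comment; the folder path is an example):

    import os

    input_image = "data/image_test"   # here the parameter points at a folder

    for root, _, files in os.walk(input_image):
        for name in files:
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                path = os.path.join(root, name)
                # Each path would go through the detect -> embed -> classify
                # steps shown in the 5_inference.ipynb sketch above.
                print("would infer:", path)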

 

7. 7_inference_webcam.ipynb

Turns on the webcam and infers who the person captured by the webcam is.

Parameter Description:

--deviceNumber 0: 0 is the index of the webcam to use. Users with more than one webcam can set this value themselves.
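
A minimal sketch of how the device number maps onto an OpenCV capture loop (the notebook's own capture and drawing code may differ):

    import cv2

    device_number = 0   # corresponds to the --deviceNumber parameter

    # Grab frames from the chosen webcam; each frame would then go through the
    # same detect -> embed -> classify pipeline used in 5_inference.ipynb.
    cap = cv2.VideoCapture(device_number)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("FaceNet webcam", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()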

 

8. 8_inference_api.ipynb

Provides a webpage for selecting an image and inferring who the person in the image is.

Parameter Description:

--port 8801: 8801 is the port used by the FaceNet webpage.

After running, if the port has not been changed, you can use 9_inference_api_browser.ipynb to open the webpage.
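
A minimal sketch of the kind of web endpoint this could expose, using Flask purely as an illustration; the SDK's actual web framework, routes, and response format are not documented here and may differ.

    from flask import Flask, request, jsonify   # illustrative framework choice

    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])    # hypothetical endpoint name
    def predict():
        uploaded = request.files["image"]
        uploaded.save("upload.jpg")
        # The saved image would go through the same classify pipeline as in
        # 5_inference.ipynb; a fixed answer stands in for the real result here.
        return jsonify({"person": "unknown"})

    if __name__ == "__main__":
        app.run(port=8801)   # matches the --port 8801 parameter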

 

(Figure: FaceNet.png)

 

This SDK is built in AppForAI - AI Dev Tools.

 

License purchased separately: USD 600; permanent authorization, single-APP authorization, single-machine authorization, one-year activation, one-year download, one-year updates, and one-year email technical support.

Contact Us and How to Buy


You are welcome to contact us. Please refer to the following link:

https://www.myai168.com/article/index?sn=11059

Recommended Articles

1. MYAI Studio for Windows

2. MYAI Studio for Linux

3. Featured AI Computers

Thanks to our customers

Taiwan University, Tsing Hua University, Yang Ming Chiao Tung University, Cheng Kung University, Taipei Medical University, Taipei University of Nursing and Health Sciences, National Chung Hsing University, Chi Nan University, Ilan University, United University, Defence University, Military Academy, Naval Academy, Feng Chia University, Chang Gung University, I-Shou University, Shih Chien University, Taiwan University of Science and Technology, Taichung University of Science and Technology, Yunlin University of Science and Technology, Chin-Yi University of Science, Formosa University, Pintung University of Science and Technology, Kaohsiung University of Science and Technology, Chaoyang University of Technology, Ming Chi University of Technology, Southern Taiwan University of Science and Technology, China University of Technology, Gushan Senior High School, Taipei Veterans General Hospital, Chang Gung Medical Foundation, Tzu Chi Medical Foundation, E-Da Hospital, Industry Technology Research Institute, Institute for Information Industry, Chung-Shan Institute of Science and Technology, Armaments Bureau, Ministry of Justice Investigation Bureau, Institute of Nuclear Energy Research, Endemic Species Research Institute, Institute of Labor, Occupational Safety And Health, Metal Industries Research & Development Centre, Taiwan Instrument Research Institute, Automotive Research & Testing Center, Taiwan Water Corporation, Taiwan Semiconductor Manufacturing Co., Ltd., United Microelectronics Corp., Nanya Technology, Winbond Electronics Corp., Xintec Inc., Arima Lasers Corporation, AU Optronics Corporation, Innolux Corporation, HannStar Display Corporation, Formosa Plastics Group., Formosa Technologies Corporation, Nan Ya Plastics Corp., Formosa Chemicals & Fibre Corporation, Chinese Petroleum Corporation, Logitech, ELAN Microelectronics Corp., Lextar Electronics Corporation, Darfon Electronics Corp., WPG Holdings, Mirle Automation Corporation, Symtek Automation Asia Co., Ltd, ChipMOS Technologies Inc., Dynapack International Technology Corporation, Primax Electronics Ltd., Feng Hsin Steel, China Ecotek, Grade Upon Technology Corp., AAEON Technology Inc., Stark Technology, Inc., Horng Terng Automation Co., Ltd., Zhen Ding Technology Holding Ltd, Boardtek Electronics Corporation, MiTAC International Corporation, Allion Labs, Inc., Sound Land Corp., Hong Hu Tech, etc.