[LEADERG AI ZOO] Jupyter-Image-Human-Pose-PyTorch


 

[Introduction]

 

This solution applies to human body pose detection: it can detect the positions of a person's eyes, nose, ears, neck, shoulders, elbows, wrists, hips, knees, and ankles.
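The body parts listed above correspond to an 18-keypoint, OpenPose-style layout. As an illustration only (the exact index order below is an assumption, not taken from this package), the keypoint set can be written out as:

```python
# Hypothetical sketch: the 18 keypoints covering the body parts listed
# above. The index order shown here is an assumption; check the model's
# own keypoint definition before relying on it.
KEYPOINT_NAMES = [
    "nose", "neck",
    "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist",
    "right_hip", "right_knee", "right_ankle",
    "left_hip", "left_knee", "left_ankle",
    "right_eye", "left_eye", "right_ear", "left_ear",
]

def name_of(index: int) -> str:
    """Map a predicted keypoint index to its human-readable name."""
    return KEYPOINT_NAMES[index]
```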

 

 

[Instruction]

 

The solution workflow is:

Prepare the dataset -> Generate the files needed for training -> Train -> Run inference in various ways

The dataset used is COCO 2017.

 

1. 1_prepare_train_labels.ipynb

Running this notebook generates the pkl file required for training.

Parameters:

--labels: the dataset's JSON label file, in object-keypoints format.

--output-name: the name of the pkl file to generate for training.

Note:

If you want to use your own dataset, the label files must be JSON files in object-keypoints format. The object-keypoints label file in the train/annotations folder must be named "person_keypoints_train.json", and the one in the val/annotations folder must be named "val.json".
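As a rough illustration of what this conversion step does, the sketch below reads a COCO object-keypoints JSON file and pickles its annotation records. The function name and the exact pkl contents are assumptions; the real notebook may store a richer, per-image structure.

```python
import json
import pickle

def prepare_train_labels(labels_path: str, output_name: str) -> int:
    """Hypothetical sketch of 1_prepare_train_labels.ipynb: load a COCO
    object-keypoints JSON label file and pickle its annotation records
    for faster loading during training. Returns the annotation count.
    This is a simplified assumption, not the package's actual code.
    """
    with open(labels_path, "r", encoding="utf-8") as f:
        data = json.load(f)
    annotations = data.get("annotations", [])
    with open(output_name, "wb") as f:
        pickle.dump(annotations, f)
    return len(annotations)
```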

 

2. 2_train.ipynb

Trains a model on the COCO 2017 dataset.

Parameters:

--train-images-folder: the folder containing the training images.

--prepared-train-labels: the pkl label file for training, generated by running 1_prepare_train_labels.ipynb.

--val-labels: the object-keypoints-format JSON label file for the validation images.

--val-images-folder: the folder containing the validation images.

--checkpoint-path: the location of the pretrained model file.
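The training parameters above can be mirrored as a small argparse interface. The flag names come from the list above; everything else (the function name, all flags being required) is assumed for illustration:

```python
import argparse

def build_train_parser() -> argparse.ArgumentParser:
    """Sketch of the training notebook's parameters as an argparse
    interface. Flag names match the list above; marking them all as
    required is an assumption."""
    parser = argparse.ArgumentParser(description="Train human pose model")
    parser.add_argument("--train-images-folder", required=True,
                        help="folder containing the training images")
    parser.add_argument("--prepared-train-labels", required=True,
                        help="pkl file from 1_prepare_train_labels.ipynb")
    parser.add_argument("--val-labels", required=True,
                        help="object-keypoints JSON for validation images")
    parser.add_argument("--val-images-folder", required=True,
                        help="folder containing the validation images")
    parser.add_argument("--checkpoint-path", required=True,
                        help="pretrained model file location")
    return parser
```

Note that argparse converts the dashes to underscores, so `--train-images-folder` is read back as `args.train_images_folder`.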

 

3. 3_inference.ipynb

Runs inference on a single image and marks the position of the human body and its joints.

Parameters:

--images-folder: the location of the images to run inference on.

--checkpoint-path: the location of the trained model.
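A minimal sketch of the marking step, assuming the model returns one (x, y, confidence) triple per keypoint and that low-confidence points are skipped before drawing; the 0.5 threshold and the function name are assumptions:

```python
def visible_keypoints(keypoints, threshold=0.5):
    """Filter predicted (x, y, confidence) keypoints down to the points
    worth drawing on the image. The confidence threshold is an assumed
    value, not one taken from this package."""
    return [(x, y) for (x, y, c) in keypoints if c >= threshold]
```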

 

[Figure: human pose.png — example of an inferred image with the body and joint positions marked]

 

4. 4_inference_folder.ipynb

Runs inference on all images in a folder and marks the position of the human body and its joints.

Parameters:

--images-folder: the location of the image folder to run inference on.

--checkpoint-path: the location of the trained model.
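Gathering every image in --images-folder before running inference on each one could look like the standard-library sketch below; the extension list and function name are assumptions, not taken from the notebook:

```python
from pathlib import Path

# Assumed set of image extensions to pick up; extend as needed.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".bmp"}

def list_images(images_folder: str) -> list:
    """Hypothetical sketch of how 4_inference_folder.ipynb could collect
    the images in --images-folder, in a stable sorted order, before
    running inference on each one."""
    folder = Path(images_folder)
    return sorted(
        p for p in folder.iterdir()
        if p.suffix.lower() in IMAGE_EXTENSIONS
    )
```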

 

5. 5_inference_webcam.ipynb

Runs inference on the webcam stream and marks the position of the human body and its joints.

Parameters:

--checkpoint-path: the location of the trained model.

--video: the webcam device to use.

 

