Human body pose detection: detects the positions of the nose, eyes, ears, neck, shoulders, elbows, wrists, hips, knees, and ankles.
[Operation steps and instructions]
1. Prepare the dataset
The APP uses the coco2017 dataset. Choose the dataset to use in Select Dataset.
If you want to use your own dataset, make a copy of coco2017, place it in the same location as coco2017 (press View to open that location), and replace the files yourself.
Note: the annotation file must be a JSON file in COCO object-keypoints format. The label file in the train/annotations folder must be named "person_keypoints_train.json", and the label file in the val/annotations folder must be named "val.json".
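Before swapping in your own dataset, it can help to sanity-check the annotation file. The sketch below assumes the public COCO object-keypoints layout (top-level "images", "annotations", "categories" lists, with keypoints stored as flat [x, y, visibility] triples); the APP may check additional fields.

```python
import json

# Minimal sanity check for a COCO object-keypoints annotation file.
# Field names follow the public COCO keypoints format; the exact
# fields the APP requires may differ.
REQUIRED_TOP_LEVEL = {"images", "annotations", "categories"}

def check_keypoints_json(data):
    """Return a list of problems found in a COCO-style keypoints dict."""
    problems = []
    missing = REQUIRED_TOP_LEVEL - set(data)
    if missing:
        problems.append(f"missing top-level keys: {sorted(missing)}")
        return problems
    for ann in data["annotations"]:
        if "keypoints" not in ann:
            problems.append(f"annotation {ann.get('id')} has no keypoints")
        elif len(ann["keypoints"]) % 3 != 0:
            # COCO stores keypoints as a flat list [x1, y1, v1, x2, y2, v2, ...]
            problems.append(f"annotation {ann.get('id')} keypoints are not triples")
    return problems

# Tiny in-memory example; for a real check, load the file with json.load().
sample = {
    "images": [{"id": 1, "file_name": "000001.jpg"}],
    "annotations": [{"id": 1, "image_id": 1, "keypoints": [100, 120, 2] * 17}],
    "categories": [{"id": 1, "name": "person"}],
}
print(check_keypoints_json(sample))  # -> []
```

To check a real file, replace `sample` with `json.load(open("person_keypoints_train.json"))`.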
2. Prepare data
Press 1. Prepare train labels to generate the train_annotation.pkl file needed for training.
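The actual schema of train_annotation.pkl is internal to the APP, but a label-preparation step of this kind typically groups the COCO keypoint annotations by image and pickles the result. The sketch below is a hypothetical illustration of that idea, not the APP's real implementation.

```python
import os
import pickle
import tempfile
from collections import defaultdict

# Hypothetical sketch of what "Prepare train labels" might do: group
# keypoint annotations by image id and pickle one record per image.
def prepare_train_labels(coco, out_path):
    by_image = defaultdict(list)
    for ann in coco["annotations"]:
        by_image[ann["image_id"]].append(ann["keypoints"])
    records = [
        {"file_name": img["file_name"], "keypoints": by_image[img["id"]]}
        for img in coco["images"]
    ]
    with open(out_path, "wb") as f:
        pickle.dump(records, f)
    return records

# Tiny in-memory stand-in for person_keypoints_train.json.
coco = {
    "images": [{"id": 1, "file_name": "000001.jpg"}],
    "annotations": [{"image_id": 1, "keypoints": [50, 60, 2] * 17}],
}
out_path = os.path.join(tempfile.mkdtemp(), "train_annotation.pkl")
records = prepare_train_labels(coco, out_path)
```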
Press 2. train (GPU) to start training.
If you need to choose a different pretrained model or set a different batch size, you can fill in the Parameter field on the right.
The trained model is placed in the model folder.
There are three types of inference: single-image inference, folder inference, and webcam inference.
To select the model used for inference, select or enter its file name in the Parameter inference model Path area.
(1) single image inference
Press 3. inference (GPU) and select the image to be inferred.
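Single-image inference loads the selected image, runs the pose model, and overlays the detected keypoints. The sketch below stubs out the model (the function `fake_pose_model` is invented for illustration; a real run would load the checkpoint chosen in inference model Path) and shows only the overlay step.

```python
import numpy as np

# Stand-in for the real pose model: returns one (x, y, confidence)
# triple per detected keypoint. Invented here purely for illustration.
def fake_pose_model(image):
    h, w = image.shape[:2]
    # Pretend the nose was found at the image centre with high confidence.
    return [(w // 2, h // 2, 0.9)]

def draw_keypoints(image, keypoints, threshold=0.5):
    """Mark each sufficiently confident keypoint with a white pixel."""
    out = image.copy()
    for x, y, conf in keypoints:
        if conf >= threshold:
            out[y, x] = 255
    return out

image = np.zeros((64, 64), dtype=np.uint8)  # stand-in for a loaded photo
marked = draw_keypoints(image, fake_pose_model(image))
```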
(2) folder inference
Press 4. inference folder (GPU) to select the folder to be inferred.
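Folder inference simply repeats single-image inference over every image file in the chosen folder. A minimal sketch of the file-discovery part (the extension list is an assumption; the APP's supported formats may differ):

```python
import os
import tempfile

# Image extensions assumed to be supported; adjust as needed.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def list_images(folder):
    """Return sorted paths of image files in a folder, skipping other files."""
    return sorted(
        os.path.join(folder, name)
        for name in os.listdir(folder)
        if os.path.splitext(name)[1].lower() in IMAGE_EXTS
    )

# Demo on a throwaway folder with two images and one non-image file.
folder = tempfile.mkdtemp()
for name in ("a.jpg", "b.PNG", "notes.txt"):
    open(os.path.join(folder, name), "w").close()
images = list_images(folder)  # notes.txt is skipped
```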
(3) webcam inference
Press 5. inference webcam (GPU) to turn on the webcam for inference.
Set the webcam id to choose which webcam to use.
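On systems with several cameras, the webcam id is usually an integer device index (0 for the default camera). The sketch below shows one way such a parameter might be handled; the capture loop uses OpenCV's `cv2.VideoCapture` and is not executed here, and `run_webcam_inference` is an illustrative name, not the APP's own function.

```python
def parse_webcam_id(value, default=0):
    """Return the webcam id as an int, falling back to the default."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

def run_webcam_inference(webcam_id):
    """Capture frames from the chosen webcam (requires a camera and OpenCV)."""
    import cv2  # OpenCV is assumed to be installed for webcam capture
    cap = cv2.VideoCapture(webcam_id)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # ... run the pose model on `frame` and display the result ...
    cap.release()

print(parse_webcam_id("1"))  # -> 1
```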
Trial of LEADERG APP
Welcome to contact us for a 15-day trial of the LEADERG APP.
How to Buy
Welcome to contact us for a quotation. We will help you buy the right products.