Human Pose Detection Model Deployment
1. Obtain the Original Model
1. Enter the human pose detection model directory:
cd ~/Projects/rknn_model_zoo/examples/yolov8_pose/model
2. Download the pretrained model:
chmod +x download_model.sh
./download_model.sh
The output looks like this:
(base) baiwen@dshanpi-a1:~/Projects/rknn_model_zoo/examples/yolov8_pose/model$ ./download_model.sh
--2025-08-19 14:28:38-- https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov8_pose/yolov8n-pose.onnx
Resolving ftrg.zbox.filez.com (ftrg.zbox.filez.com)... 180.184.171.46
Connecting to ftrg.zbox.filez.com (ftrg.zbox.filez.com)|180.184.171.46|:443... connected.
HTTP request sent, awaiting response... 200
Length: 13326816 (13M) [application/octet-stream]
Saving to: ‘./yolov8n-pose.onnx’
./yolov8n-pose.onnx 100%[==============================================>] 12.71M 588KB/s in 18s
2025-08-19 14:28:56 (731 KB/s) - ‘./yolov8n-pose.onnx’ saved [13326816/13326816]
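If the download is interrupted, the saved file can be truncated and the later conversion step will fail with a confusing error. A quick sanity check is to compare the on-disk size against the Length reported by wget (13326816 bytes in the log above). A minimal sketch; the helper name is ours, not part of rknn_model_zoo:

```python
import os

def download_complete(path: str, expected_bytes: int) -> bool:
    """Return True if the file exists and matches the expected byte count."""
    return os.path.isfile(path) and os.path.getsize(path) == expected_bytes

# Expected size taken from the wget log above:
# download_complete("./yolov8n-pose.onnx", 13326816)
```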
2. Model Conversion
1. Activate the rknn-toolkit2 environment with Conda:
conda activate rknn-toolkit2
2. Enter the yolov8_pose model conversion directory:
cd ~/Projects/rknn_model_zoo/examples/yolov8_pose/python
3. Run the model conversion:
python3 convert.py ../model/yolov8n-pose.onnx rk3576
The output looks like this:
(rknn-toolkit2) baiwen@dshanpi-a1:~/Projects/rknn_model_zoo/examples/yolov8_pose/python$ python3 convert.py ../model/yolov8n-pose.onnx rk3576
I rknn-toolkit2 version: 2.3.2
--> Config model
done
--> Loading model
I Loading : 100%|█████████ ██████████████████████████████████████| 167/167 [00:00<00:00, 8309.20it/s]
done
--> Building model
I OpFusing 0: 100%|██████████████████████████████████████████████| 100/100 [00:00<00:00, 108.73it/s]
I OpFusing 1 : 100%|██████████████████████████████████████████████| 100/100 [00:01<00:00, 52.42it/s]
I OpFusing 0 : 100%|██████████████████████████████████████████████| 100/100 [00:03<00:00, 26.72it/s]
I OpFusing 1 : 100%|██████████████████████████████████████████████| 100/100 [00:04<00:00, 23.30it/s]
I OpFusing 0 : 100%|██████████████████████████████████████████████| 100/100 [00:04<00:00, 21.51it/s]
I OpFusing 1 : 100%|██████████████████████████████████████████████| 100/100 [00:04<00:00, 20.61it/s]
I OpFusing 2 : 100%|██████████████████████████████████████████████| 100/100 [00:07<00:00, 14.11it/s]
I GraphPreparing : 100%|█████████████████████████████████████████| 202/202 [00:00<00:00, 820.24it/s]
I Quantizating : 100%|████████████████████████████████████████████| 202/202 [01:20<00:00, 2.52it/s]
W hybrid_quantization_step2: The node that pointed by '/model.22/Slice_3_output_0' is specaifed repeatedly!
I OpFusing 0: 100%|█████████████████████████████████████████████| 100/100 [00:00<00:00, 5195.79it/s]
I OpFusing 1 : 100%|████████████████████████████████████████████| 100/100 [00:00<00:00, 3367.67it/s]
I OpFusing 2 : 100%|████████████████████████████████████████████| 100/100 [00:00<00:00, 2700.01it/s]
W hybrid_quantization_step2: The default input dtype of 'images' is changed from 'float32' to 'int8' in rknn model for performance!
Please take care of this change when deploy rknn model with Runtime API!
W hybrid_quantization_step2: The default output dtype of 'output0' is changed from 'float32' to 'int8' in rknn model for performance!
Please take care of this change when deploy rknn model with Runtime API!
I rknn building ...
I rknn building done.
done
--> Export rknn model
output_path: ../model/yolov8_pose.rknn
done
After the conversion completes, the device-side RKNN model (yolov8_pose.rknn) can be found in the model directory.
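Note the two warnings in the log: for performance, the converted model's default input and output dtypes were changed from float32 to int8. When you call this model through the Runtime API yourself, quantized outputs must be mapped back to float before post-processing using the affine rule real = scale * (q - zero_point). A minimal pure-Python sketch; the scale and zero-point values below are illustrative, not read from this model:

```python
def dequantize(q_values, scale: float, zero_point: int):
    """Map int8 quantized values back to float: real = scale * (q - zero_point)."""
    return [scale * (q - zero_point) for q in q_values]

def quantize(values, scale: float, zero_point: int):
    """Inverse mapping, clamped to the int8 range [-128, 127]."""
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
```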
3. Model Inference
Run the inference test script:
python3 yolov8_pose.py --model_path ../model/yolov8_pose.rknn --target rk3576
The output looks like this:
(rknn-toolkit2) baiwen@dshanpi-a1:~/Projects/rknn_model_zoo/examples/yolov8_pose/python$ python3 yolov8_pose.py --model_path ../model/yolov8_pose.rknn --target rk3576
I rknn-toolkit2 version: 2.3.2
done
--> Init runtime environment
I target set by user is: rk3576
done
--> Running model
W inference: The 'data_format' is not set, and its default value is 'nhwc'!
save image in ./result.jpg
When the run finishes, the result image result.jpg is generated in the current directory.
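If you later write your own deployment code against the exported RKNN model, the input image is typically letterboxed to the network's input resolution (640x640 for the default yolov8n-pose) so the aspect ratio is preserved, and detected boxes/keypoints are mapped back by undoing the same scale and padding. A minimal sketch of that arithmetic; the function name is ours, and the actual resize would use an image library:

```python
def letterbox_params(src_w: int, src_h: int, dst: int = 640):
    """Compute the uniform scale and the left/top padding that fit a
    src_w x src_h image into a dst x dst square without distortion."""
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) // 2  # symmetric horizontal padding
    pad_y = (dst - new_h) // 2  # symmetric vertical padding
    return scale, new_w, new_h, pad_x, pad_y
```

To map a model-space coordinate back to the source image, subtract the padding and divide by the scale.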