
Luigi committed
Commit bd45a81 (parent 0a36dd1)

Update README.md

Files changed (1): README.md (+7, -5)
README.md CHANGED
@@ -4,7 +4,7 @@ pipeline_tag: object-detection
 tags:
 - Pose Estimation
 ---
-RTMO / YOLO-NAS-Pose Inference with CUDAExecutionProvider / TensorrtExecutionProvider DEMO
+## RTMO / YOLO-NAS-Pose Inference with CUDAExecutionProvider / TensorrtExecutionProvider DEMO
 
 - `demo.sh`: DEMO main program; it first installs rtmlib and then uses rtmo-s to analyze the .mp4 files in the video folder.
 - `demo_batch.sh`: Multi-batch version of demo.sh
@@ -16,13 +16,15 @@
 - `rtmo_demo_batch.py`: Multi-batch version of demo_batch.sh
 - `video`: Contains one test video.
 
-Original ONNX models come from [](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo) trained on body7. We did only
+# Note
+
+* Original ONNX models come from the [MMPOSE/RTMO Project Page](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo), trained on body7.
+* DEMO Inference Code is modified from [rtmlib](https://github.com/Tau-J/rtmlib).
+* TensorrtExecutionProvider only supports models with a fixed batch size (*_batchN.onnx), while CUDAExecutionProvider can also run with a dynamic batch size.
 
 We did only the following to make them work with TensorrtExecutionProvider:
 
 1. Shape inference
 2. Batch size fixation (to 1, 2, and 4)
 
-Note: TensorrtExecutionProvider only supports Models with fixed batch size (*_batchN.onnx) while CUDAExecutionProvider can run with dynamic batch size.
-
-FP16 ONNX model is also provided.
+PS. An FP16 ONNX model is also provided.
 
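To try the models outside the bundled demo scripts, a session can be opened directly with onnxruntime. This is a minimal sketch, not the demo code; the file name `rtmo-s_batch1.onnx` is a placeholder for one of the fixed-batch models in this repo.

```python
# Minimal sketch (not the demo code): open one of the models with ONNX Runtime
# and let it fall back from TensorRT to CUDA to CPU. The file name
# "rtmo-s_batch1.onnx" is a placeholder; use an actual file from this repo.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "rtmo-s_batch1.onnx",
    providers=[
        "TensorrtExecutionProvider",  # needs a fixed-batch *_batchN.onnx
        "CUDAExecutionProvider",      # also handles the dynamic-batch model
        "CPUExecutionProvider",       # last-resort fallback
    ],
)

inp = session.get_inputs()[0]
print("input:", inp.name, inp.shape)  # batch dim is a constant in *_batchN.onnx

# Feed a dummy tensor just to exercise the session; real preprocessing
# (resize, padding, normalization) is done by rtmlib in the demo scripts.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = session.run(None, {inp.name: np.random.rand(*shape).astype(np.float32)})
print("outputs:", [o.shape for o in outputs])
```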
 
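Steps 1 and 2 from the README (shape inference, then batch-size fixation) can be reproduced with the `onnx` package. How the published *_batchN.onnx files were actually generated is not documented here, so treat this as one plausible way:

```python
# Illustrative reconstruction of the two preparation steps: (1) run shape
# inference, (2) pin the symbolic batch dimension to a constant. The tooling
# actually used for the published *_batchN.onnx files is not stated here.
import onnx
from onnx import shape_inference

model = onnx.load("rtmo-s.onnx")             # placeholder: dynamic-batch model
model = shape_inference.infer_shapes(model)  # step 1: shape inference

BATCH = 4  # e.g. to produce a *_batch4.onnx variant (1 and 2 work the same way)
for value in list(model.graph.input) + list(model.graph.output):
    dim0 = value.type.tensor_type.shape.dim[0]
    if dim0.dim_param:          # symbolic batch dimension such as "batch"
        dim0.dim_value = BATCH  # step 2: fix it (assigning clears dim_param)

onnx.checker.check_model(model)
onnx.save(model, "rtmo-s_batch4.onnx")
```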
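
The FP16 variant mentioned in the postscript can be derived from an FP32 model, for example with `onnxconverter-common`; whether that exact tool was used here is an assumption.

```python
# Sketch: produce an FP16 copy of an FP32 ONNX model with onnxconverter-common.
# Whether the published FP16 file was generated this way is an assumption.
import onnx
from onnxconverter_common import float16

model = onnx.load("rtmo-s_batch1.onnx")               # placeholder input file
model_fp16 = float16.convert_float_to_float16(model)  # cast weights to float16
onnx.save(model_fp16, "rtmo-s_batch1_fp16.onnx")
```

FP16 halves the model size and can speed up GPU inference, at some cost in numeric precision.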