---
pipeline_tag: object-detection
tags:
- Pose Estimation
---

RTMO / YOLO-NAS-Pose Inference with CUDAExecutionProvider / TensorrtExecutionProvider DEMO

- `demo.sh`: DEMO main program, which first installs rtmlib and then uses rtmo-s to analyze the .mp4 files in the `video` folder.
- `demo_batch.sh`: Multi-batch version of `demo.sh`.
- `rtmo_gpu.py`: Defines the RTMO_GPU (and RTMO_GPU_BATCH) classes, which fine-tune the CUDA and TensorRT execution-provider settings.
- `rtmo_demo.py`: Python main program, which takes three arguments (see the example invocations after this list):
  - `path`: The folder location that contains the .mp4 files to be analyzed.
  - `model_path`: The local path to the ONNX model, or a URL pointing to the RTMO model published on mmpose.
  - `--yolo_nas_pose`: Pass this flag to run inference with a YOLO-NAS-Pose model instead of an RTMO model.
- `rtmo_demo_batch.py`: Multi-batch version of `rtmo_demo.py`.
- `video`: Contains one test video.
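
For example, assuming `path` and `model_path` are positional arguments (the model file names below are illustrative, not necessarily the files shipped here):

```bash
# Analyze every .mp4 under ./video with a local RTMO ONNX model
python rtmo_demo.py video rtmo-s_640_batch1.onnx

# Same, but with a YOLO-NAS-Pose model
python rtmo_demo.py video yolo_nas_pose_s.onnx --yolo_nas_pose
```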

The original ONNX models come from the [mmpose RTMO project](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo) and were trained on body7.

We did the following to make them work with TensorrtExecutionProvider:

1. Shape inference
2. Fixing the batch size to 1, 2, or 4
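
A minimal sketch of these two steps using the `onnx` Python package (file names and the batch size of 4 are illustrative; this is not necessarily the exact script that was used):

```python
import onnx
from onnx import shape_inference

# Load the original dynamic-batch model (file name is hypothetical).
model = onnx.load("rtmo-s_640.onnx")

# Fix the batch dimension of every graph input to a constant (here: 4).
for graph_input in model.graph.input:
    graph_input.type.tensor_type.shape.dim[0].dim_value = 4

# Re-run shape inference so intermediate tensor shapes become explicit.
model = shape_inference.infer_shapes(model)

onnx.save(model, "rtmo-s_640_batch4.onnx")
```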

Note: TensorrtExecutionProvider only supports models with a fixed batch size (`*_batchN.onnx`), while CUDAExecutionProvider can also run models with a dynamic batch size.
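
To illustrate, a sketch of creating an onnxruntime session with both execution providers (the provider options shown are standard TensorRT EP settings; the model file name is illustrative):

```python
import onnxruntime as ort

# TensorRT EP needs a fixed-batch model; CUDA EP would also accept the dynamic one.
providers = [
    ("TensorrtExecutionProvider", {
        "trt_fp16_enable": True,          # allow FP16 TensorRT kernels
        "trt_engine_cache_enable": True,  # cache built engines across runs
    }),
    "CUDAExecutionProvider",  # fallback for nodes TensorRT cannot take
    "CPUExecutionProvider",
]

session = ort.InferenceSession("rtmo-s_640_batch4.onnx", providers=providers)
print(session.get_providers())  # confirm which providers are active
```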

An FP16 ONNX model is also provided.
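
One common way to produce such an FP16 model is `onnxconverter-common` (an illustrative sketch, not necessarily the tool used here):

```python
import onnx
from onnxconverter_common import float16

model = onnx.load("rtmo-s_640_batch4.onnx")

# Convert weights and ops to FP16; keep inputs/outputs in FP32 so callers
# can feed the same float32 tensors as before.
model_fp16 = float16.convert_float_to_float16(model, keep_io_types=True)

onnx.save(model_fp16, "rtmo-s_640_batch4_fp16.onnx")
```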