Luigi committed on
Commit
96d8b84
1 Parent(s): 4d143ab

Remove warning on non-calibrated INT8 model's potentially bad accuracy

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -3,14 +3,15 @@ pipeline_tag: object-detection
 datasets:
 - Mai0313/coco-pose-2017
 tags:
-- TensorRT
 - Pose Estimation
 - YOLO-NAS-Pose
 - Jetson Orin
+- JetPack 5.1.1
+- TensorRT 8.5.2
 ---
 
 We offer a TensorRT model in various precisions including int8, fp16, fp32, and mixed, converted from Deci-AI's [YOLO-NAS-Pose](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS-POSE.md) pre-trained weights in PyTorch.
-This model is compatible with Jetson Orin Nano hardware.
+This (TensorRT) model is compatible with JetPack 5.1.1, benchmarked and tested on the Jetson Orin Nano Developer Kit.
 
 ~~Note that all quantization that has been introduced in the conversion is purely static, meaning that the corresponding model has potentially bad accuracy compared to the original one.~~
 