Luigi committed on
Commit 4d143ab
1 Parent(s): 3a59c70

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -12,9 +12,9 @@ tags:
 We offer a TensorRT model in various precisions including int8, fp16, fp32, and mixed, converted from Deci-AI's [YOLO-NAS-Pose](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS-POSE.md) pre-trained weights in PyTorch.
 This model is compatible with Jetson Orin Nano hardware.
 
-Note that all quantization introduced in the conversion is purely static, meaning that the converted model potentially has worse accuracy than the original.
+~~Note that all quantization introduced in the conversion is purely static, meaning that the converted model potentially has worse accuracy than the original.~~
 
-Todo: use the [coco-pose-2017](https://huggingface.co/datasets/Mai0313/coco-pose-2017) dataset to calibrate the int8 model
+Todo: ~~use the [cppe-5](https://huggingface.co/datasets/cppe-5) dataset to calibrate the int8 model~~
 
 For more information on calibration for post-training quantization, see [this slide deck](https://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf).
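
For context on the calibration Todo struck through above: a minimal sketch of what an int8 entropy calibrator for TensorRT's post-training quantization might look like in Python. The class name `PoseCalibrator`, the 1x3x640x640 input shape, and the file paths are illustrative assumptions, not part of this repository.

```python
# Hypothetical sketch of a TensorRT int8 calibrator for PTQ.
# Assumes calibration images are already preprocessed into float32
# NCHW arrays matching the exported input shape (1x3x640x640 here).
import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt


class PoseCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, batches, cache_file="calibration.cache"):
        super().__init__()
        self.batches = iter(batches)      # iterable of np.float32 arrays, NCHW
        self.cache_file = cache_file
        # Device buffer sized for one 1x3x640x640 float32 batch (assumed shape).
        self.device_input = cuda.mem_alloc(1 * 3 * 640 * 640 * np.float32().nbytes)

    def get_batch_size(self):
        return 1

    def get_batch(self, names):
        try:
            batch = next(self.batches)
        except StopIteration:
            return None                   # None tells TensorRT calibration is done
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]   # device pointer per input tensor

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()           # reuse a cached table to skip recalibration
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

During engine building, such a calibrator would be attached to the builder config with `config.set_flag(trt.BuilderFlag.INT8)` and `config.int8_calibrator = PoseCalibrator(batches)`; TensorRT then feeds the calibration batches through the network to choose activation scales, which is what the static conversion noted above skips.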