
YOLOv5: Object Detection

YOLOv5 is a one-stage object detection network. Its architecture consists of four parts: a backbone built on a modified CSPNet, a high-resolution feature fusion module based on FPN (Feature Pyramid Network), a pooling module based on SPP (Spatial Pyramid Pooling), and three detection heads for detecting targets of different sizes.
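The three detection heads operate on feature maps downsampled from the input by fixed strides of 8, 16, and 32, so a 640×640 input produces 80×80, 40×40, and 20×20 detection grids. A minimal sketch of this relationship (illustrative only; `head_grid_sizes` is a hypothetical helper, not part of the YOLOv5 codebase):

```python
# YOLOv5 detects objects at three scales; each detection head works on a
# feature map downsampled from the input by a fixed stride (8, 16, 32).
STRIDES = (8, 16, 32)

def head_grid_sizes(input_size: int, strides=STRIDES):
    """Return the (height, width) grid for each detection head."""
    return [(input_size // s, input_size // s) for s in strides]

if __name__ == "__main__":
    for stride, (gh, gw) in zip(STRIDES, head_grid_sizes(640)):
        print(f"stride {stride}: {gh}x{gw} grid")
```

The smallest-stride head (the densest grid) is responsible for small objects, while the stride-32 head covers large ones.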

The YOLOv5 model can be found here.


Source Model

The following steps, based on the yolov5 tutorials, obtain the source model in ONNX format.

The source model YOLOv5s.onnx can also be found here.

Environment Preparation

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Export to ONNX

python export.py --weights yolov5s.pt --include torchscript onnx --opset 12
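The exported model expects a fixed 640×640 input, so images are typically letterbox-resized: scaled to fit while preserving aspect ratio, then padded to a square. A minimal sketch of the padding arithmetic (a hypothetical helper, not the ultralytics implementation):

```python
# Letterbox sketch: scale (h, w) to fit inside a target x target square
# while keeping aspect ratio, then center it with padding.
def letterbox_params(h: int, w: int, target: int = 640):
    """Return (new_h, new_w, pad_top, pad_left) for letterbox resizing."""
    scale = min(target / h, target / w)          # shrink-to-fit factor
    new_h, new_w = round(h * scale), round(w * scale)
    pad_top = (target - new_h) // 2              # vertical padding (one side)
    pad_left = (target - new_w) // 2             # horizontal padding (one side)
    return new_h, new_w, pad_top, pad_left
```

For example, a 1280×720 frame is scaled to 640×360 and padded with 140 pixels above and below; the same offsets are needed later to map detected boxes back to the original image.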

Performance

🧰QCS6490

| Device | Runtime | Model | Size (pixels) | Inference Time (ms) | Precision | Compute Unit | Model Download |
|---|---|---|---|---|---|---|---|
| AidBox QCS6490 | QNN | YOLOv5s(cutoff) | 640 | 6.7 | INT8 | NPU | model download |
| AidBox QCS6490 | QNN | YOLOv5s(cutoff) | 640 | 15.2 | INT16 | NPU | model download |
| AidBox QCS6490 | SNPE | YOLOv5s(cutoff) | 640 | 5.5 | INT8 | NPU | model download |
| AidBox QCS6490 | SNPE | YOLOv5s(cutoff) | 640 | 13.4 | INT16 | NPU | model download |

🧰QCS8550

| Device | Runtime | Model | Size (pixels) | Inference Time (ms) | Precision | Compute Unit | Model Download |
|---|---|---|---|---|---|---|---|
| APLUX QCS8550 | QNN | YOLOv5s(cutoff) | 640 | 4.1 | INT8 | NPU | model download |
| APLUX QCS8550 | QNN | YOLOv5s(cutoff) | 640 | 13.4 | INT16 | NPU | model download |
| APLUX QCS8550 | SNPE | YOLOv5s(cutoff) | 640 | 2.3 | INT8 | NPU | model download |
| APLUX QCS8550 | SNPE | YOLOv5s(cutoff) | 640 | 5.8 | INT16 | NPU | model download |

Model Conversion

Demo models are converted with AIMO (AI Model Optimizer).

The demo model conversion steps on AIMO can be found below:

🧰QCS6490

| Device | Runtime | Model | Size (pixels) | Precision | Compute Unit | AIMO Conversion Steps |
|---|---|---|---|---|---|---|
| AidBox QCS6490 | QNN | YOLOv5s(cutoff) | 640 | INT8 | NPU | view steps |
| AidBox QCS6490 | QNN | YOLOv5s(cutoff) | 640 | INT16 | NPU | view steps |
| AidBox QCS6490 | SNPE | YOLOv5s(cutoff) | 640 | INT8 | NPU | view steps |
| AidBox QCS6490 | SNPE | YOLOv5s(cutoff) | 640 | INT16 | NPU | view steps |

🧰QCS8550

| Device | Runtime | Model | Size (pixels) | Precision | Compute Unit | AIMO Conversion Steps |
|---|---|---|---|---|---|---|
| APLUX QCS8550 | QNN | YOLOv5s(cutoff) | 640 | INT8 | NPU | view steps |
| APLUX QCS8550 | QNN | YOLOv5s(cutoff) | 640 | INT16 | NPU | view steps |
| APLUX QCS8550 | SNPE | YOLOv5s(cutoff) | 640 | INT8 | NPU | view steps |
| APLUX QCS8550 | SNPE | YOLOv5s(cutoff) | 640 | INT16 | NPU | view steps |

Tutorial

Step 1: convert model

1.1 Prepare the source model in ONNX format. The source model can be found here, or obtained by following the Source Model section above.

1.2 Log in to AIMO and convert the source model to the target format. The conversion procedure follows the AIMO Conversion Steps column in the Model Conversion tables.

1.3 After the conversion task completes, download the target model file.

Note: you can skip the model conversion step and directly download a converted model from the Performance tables.
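The INT8 and INT16 entries in the tables refer to quantized models. As a rough illustration of what linear symmetric quantization does to weights and activations (a sketch only; AIMO's actual scheme, e.g. asymmetric or per-channel quantization, may differ):

```python
import numpy as np

def quantize_symmetric(x: np.ndarray, bits: int = 8):
    """Quantize float values to signed integers with a single shared scale."""
    qmax = 2 ** (bits - 1) - 1                    # 127 for INT8, 32767 for INT16
    max_abs = np.abs(x).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map quantized integers back to approximate float values."""
    return q.astype(np.float32) * scale
```

INT16 uses the same idea with a much finer scale, which is why the INT16 variants are more accurate but slower in the performance tables above.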

Step 2: install AidLite SDK

# install aidlite sdk c++ api 
sudo aid-pkg -i aidlite-sdk
# install aidlite sdk python api
python3 -m pip install pyaidlite -i https://mirrors.aidlux.com --trusted-host mirrors.aidlux.com

The AidLite SDK developer documentation can be found here.

Step 3: model inference

3.1 Download the demo program

# download demo program
wget https://huggingface.co/aplux/YOLOv5/resolve/main/examples.zip
# unzip
unzip examples.zip

3.2 Set model_path in the demo script to your model path and run the demo

# run qnn demo
python qnn_yolov5_multi.py
# run snpe demo
python snpe2_yolov5_multi.py
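Whichever runtime is used, the raw detection-head outputs must be filtered with non-maximum suppression (NMS) before boxes are drawn. A minimal NMS sketch in NumPy (illustrative; the demo scripts' own post-processing may differ):

```python
import numpy as np

def iou(box: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.45):
    """Keep the highest-scoring box and drop others that overlap it too much."""
    order = np.argsort(scores)[::-1]   # indices sorted by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return keep
```

A score threshold is normally applied first so that NMS only sees confident candidates; 0.45 is the IoU threshold commonly used with YOLOv5.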