&&&& RUNNING TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --onnx=yolo_nas_pose_l_int8.onnx --fp16 --avgRuns=100 --duration=15 --saveEngine=yolo_nas_pose_l_int8.onnx.fp16.engine
[12/28/2023-19:27:25] [I] === Model Options ===
[12/28/2023-19:27:25] [I] Format: ONNX
[12/28/2023-19:27:25] [I] Model: yolo_nas_pose_l_int8.onnx
[12/28/2023-19:27:25] [I] Output:
[12/28/2023-19:27:25] [I] === Build Options ===
[12/28/2023-19:27:25] [I] Max batch: explicit batch
[12/28/2023-19:27:25] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[12/28/2023-19:27:25] [I] minTiming: 1
[12/28/2023-19:27:25] [I] avgTiming: 8
[12/28/2023-19:27:25] [I] Precision: FP32+FP16
[12/28/2023-19:27:25] [I] LayerPrecisions: 
[12/28/2023-19:27:25] [I] Calibration: 
[12/28/2023-19:27:25] [I] Refit: Disabled
[12/28/2023-19:27:25] [I] Sparsity: Disabled
[12/28/2023-19:27:25] [I] Safe mode: Disabled
[12/28/2023-19:27:25] [I] DirectIO mode: Disabled
[12/28/2023-19:27:25] [I] Restricted mode: Disabled
[12/28/2023-19:27:25] [I] Build only: Disabled
[12/28/2023-19:27:25] [I] Save engine: yolo_nas_pose_l_int8.onnx.fp16.engine
[12/28/2023-19:27:25] [I] Load engine: 
[12/28/2023-19:27:25] [I] Profiling verbosity: 0
[12/28/2023-19:27:25] [I] Tactic sources: Using default tactic sources
[12/28/2023-19:27:25] [I] timingCacheMode: local
[12/28/2023-19:27:25] [I] timingCacheFile: 
[12/28/2023-19:27:25] [I] Heuristic: Disabled
[12/28/2023-19:27:25] [I] Preview Features: Use default preview flags.
[12/28/2023-19:27:25] [I] Input(s)s format: fp32:CHW
[12/28/2023-19:27:25] [I] Output(s)s format: fp32:CHW
[12/28/2023-19:27:25] [I] Input build shapes: model
[12/28/2023-19:27:25] [I] Input calibration shapes: model
[12/28/2023-19:27:25] [I] === System Options ===
[12/28/2023-19:27:25] [I] Device: 0
[12/28/2023-19:27:25] [I] DLACore: 
[12/28/2023-19:27:25] [I] Plugins:
[12/28/2023-19:27:25] [I] === Inference Options ===
[12/28/2023-19:27:25] [I] Batch: Explicit
[12/28/2023-19:27:25] [I] Input inference shapes: model
[12/28/2023-19:27:25] [I] Iterations: 10
[12/28/2023-19:27:25] [I] Duration: 15s (+ 200ms warm up)
[12/28/2023-19:27:25] [I] Sleep time: 0ms
[12/28/2023-19:27:25] [I] Idle time: 0ms
[12/28/2023-19:27:25] [I] Streams: 1
[12/28/2023-19:27:25] [I] ExposeDMA: Disabled
[12/28/2023-19:27:25] [I] Data transfers: Enabled
[12/28/2023-19:27:25] [I] Spin-wait: Disabled
[12/28/2023-19:27:25] [I] Multithreading: Disabled
[12/28/2023-19:27:25] [I] CUDA Graph: Disabled
[12/28/2023-19:27:25] [I] Separate profiling: Disabled
[12/28/2023-19:27:25] [I] Time Deserialize: Disabled
[12/28/2023-19:27:25] [I] Time Refit: Disabled
[12/28/2023-19:27:25] [I] NVTX verbosity: 0
[12/28/2023-19:27:25] [I] Persistent Cache Ratio: 0
[12/28/2023-19:27:25] [I] Inputs:
[12/28/2023-19:27:25] [I] === Reporting Options ===
[12/28/2023-19:27:25] [I] Verbose: Disabled
[12/28/2023-19:27:25] [I] Averages: 100 inferences
[12/28/2023-19:27:25] [I] Percentiles: 90,95,99
[12/28/2023-19:27:25] [I] Dump refittable layers:Disabled
[12/28/2023-19:27:25] [I] Dump output: Disabled
[12/28/2023-19:27:25] [I] Profile: Disabled
[12/28/2023-19:27:25] [I] Export timing to JSON file: 
[12/28/2023-19:27:25] [I] Export output to JSON file: 
[12/28/2023-19:27:25] [I] Export profile to JSON file: 
[12/28/2023-19:27:25] [I] 
[12/28/2023-19:27:25] [I] === Device Information ===
[12/28/2023-19:27:25] [I] Selected Device: Orin
[12/28/2023-19:27:25] [I] Compute Capability: 8.7
[12/28/2023-19:27:25] [I] SMs: 8
[12/28/2023-19:27:25] [I] Compute Clock Rate: 0.624 GHz
[12/28/2023-19:27:25] [I] Device Global Memory: 7471 MiB
[12/28/2023-19:27:25] [I] Shared Memory per SM: 164 KiB
[12/28/2023-19:27:25] [I] Memory Bus Width: 128 bits (ECC disabled)
[12/28/2023-19:27:25] [I] Memory Clock Rate: 0.624 GHz
[12/28/2023-19:27:25] [I] 
[12/28/2023-19:27:25] [I] TensorRT version: 8.5.2
[12/28/2023-19:27:26] [I] [TRT] [MemUsageChange] Init CUDA: CPU +220, GPU +0, now: CPU 249, GPU 2833 (MiB)
[12/28/2023-19:27:28] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +302, GPU +284, now: CPU 574, GPU 3139 (MiB)
[12/28/2023-19:27:28] [I] Start parsing network model
[12/28/2023-19:27:29] [I] [TRT] ----------------------------------------------------------------
[12/28/2023-19:27:29] [I] [TRT] Input filename:   yolo_nas_pose_l_int8.onnx
[12/28/2023-19:27:29] [I] [TRT] ONNX IR version:  0.0.8
[12/28/2023-19:27:29] [I] [TRT] Opset version:    17
[12/28/2023-19:27:29] [I] [TRT] Producer name:    pytorch
[12/28/2023-19:27:29] [I] [TRT] Producer version: 2.1.2
[12/28/2023-19:27:29] [I] [TRT] Domain:           
[12/28/2023-19:27:29] [I] [TRT] Model version:    0
[12/28/2023-19:27:29] [I] [TRT] Doc string:       
[12/28/2023-19:27:29] [I] [TRT] ----------------------------------------------------------------
[12/28/2023-19:27:33] [I] Finish parsing network model
&&&& FAILED TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --onnx=yolo_nas_pose_l_int8.onnx --fp16 --avgRuns=100 --duration=15 --saveEngine=yolo_nas_pose_l_int8.onnx.fp16.engine