YOLO-NAS-Pose-JetPack5 / yolo_nas_pose_l_fp32.onnx.int8.engine.log
&&&& RUNNING TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --onnx=yolo_nas_pose_l_fp32.onnx --int8 --avgRuns=100 --duration=15 --saveEngine=yolo_nas_pose_l_fp32.onnx.int8.engine
[12/28/2023-15:10:07] [I] === Model Options ===
[12/28/2023-15:10:07] [I] Format: ONNX
[12/28/2023-15:10:07] [I] Model: yolo_nas_pose_l_fp32.onnx
[12/28/2023-15:10:07] [I] Output:
[12/28/2023-15:10:07] [I] === Build Options ===
[12/28/2023-15:10:07] [I] Max batch: explicit batch
[12/28/2023-15:10:07] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[12/28/2023-15:10:07] [I] minTiming: 1
[12/28/2023-15:10:07] [I] avgTiming: 8
[12/28/2023-15:10:07] [I] Precision: FP32+INT8
[12/28/2023-15:10:07] [I] LayerPrecisions:
[12/28/2023-15:10:07] [I] Calibration: Dynamic
[12/28/2023-15:10:07] [I] Refit: Disabled
[12/28/2023-15:10:07] [I] Sparsity: Disabled
[12/28/2023-15:10:07] [I] Safe mode: Disabled
[12/28/2023-15:10:07] [I] DirectIO mode: Disabled
[12/28/2023-15:10:07] [I] Restricted mode: Disabled
[12/28/2023-15:10:07] [I] Build only: Disabled
[12/28/2023-15:10:07] [I] Save engine: yolo_nas_pose_l_fp32.onnx.int8.engine
[12/28/2023-15:10:07] [I] Load engine:
[12/28/2023-15:10:07] [I] Profiling verbosity: 0
[12/28/2023-15:10:07] [I] Tactic sources: Using default tactic sources
[12/28/2023-15:10:07] [I] timingCacheMode: local
[12/28/2023-15:10:07] [I] timingCacheFile:
[12/28/2023-15:10:07] [I] Heuristic: Disabled
[12/28/2023-15:10:07] [I] Preview Features: Use default preview flags.
[12/28/2023-15:10:07] [I] Input(s)s format: fp32:CHW
[12/28/2023-15:10:07] [I] Output(s)s format: fp32:CHW
[12/28/2023-15:10:07] [I] Input build shapes: model
[12/28/2023-15:10:07] [I] Input calibration shapes: model
[12/28/2023-15:10:07] [I] === System Options ===
[12/28/2023-15:10:07] [I] Device: 0
[12/28/2023-15:10:07] [I] DLACore:
[12/28/2023-15:10:07] [I] Plugins:
[12/28/2023-15:10:07] [I] === Inference Options ===
[12/28/2023-15:10:07] [I] Batch: Explicit
[12/28/2023-15:10:07] [I] Input inference shapes: model
[12/28/2023-15:10:07] [I] Iterations: 10
[12/28/2023-15:10:07] [I] Duration: 15s (+ 200ms warm up)
[12/28/2023-15:10:07] [I] Sleep time: 0ms
[12/28/2023-15:10:07] [I] Idle time: 0ms
[12/28/2023-15:10:07] [I] Streams: 1
[12/28/2023-15:10:07] [I] ExposeDMA: Disabled
[12/28/2023-15:10:07] [I] Data transfers: Enabled
[12/28/2023-15:10:07] [I] Spin-wait: Disabled
[12/28/2023-15:10:07] [I] Multithreading: Disabled
[12/28/2023-15:10:07] [I] CUDA Graph: Disabled
[12/28/2023-15:10:07] [I] Separate profiling: Disabled
[12/28/2023-15:10:07] [I] Time Deserialize: Disabled
[12/28/2023-15:10:07] [I] Time Refit: Disabled
[12/28/2023-15:10:07] [I] NVTX verbosity: 0
[12/28/2023-15:10:07] [I] Persistent Cache Ratio: 0
[12/28/2023-15:10:07] [I] Inputs:
[12/28/2023-15:10:07] [I] === Reporting Options ===
[12/28/2023-15:10:07] [I] Verbose: Disabled
[12/28/2023-15:10:07] [I] Averages: 100 inferences
[12/28/2023-15:10:07] [I] Percentiles: 90,95,99
[12/28/2023-15:10:07] [I] Dump refittable layers:Disabled
[12/28/2023-15:10:07] [I] Dump output: Disabled
[12/28/2023-15:10:07] [I] Profile: Disabled
[12/28/2023-15:10:07] [I] Export timing to JSON file:
[12/28/2023-15:10:07] [I] Export output to JSON file:
[12/28/2023-15:10:07] [I] Export profile to JSON file:
[12/28/2023-15:10:07] [I]
[12/28/2023-15:10:07] [I] === Device Information ===
[12/28/2023-15:10:07] [I] Selected Device: Orin
[12/28/2023-15:10:07] [I] Compute Capability: 8.7
[12/28/2023-15:10:07] [I] SMs: 8
[12/28/2023-15:10:07] [I] Compute Clock Rate: 0.624 GHz
[12/28/2023-15:10:07] [I] Device Global Memory: 7471 MiB
[12/28/2023-15:10:07] [I] Shared Memory per SM: 164 KiB
[12/28/2023-15:10:07] [I] Memory Bus Width: 128 bits (ECC disabled)
[12/28/2023-15:10:07] [I] Memory Clock Rate: 0.624 GHz
[12/28/2023-15:10:07] [I]
[12/28/2023-15:10:07] [I] TensorRT version: 8.5.2
[12/28/2023-15:10:12] [I] [TRT] [MemUsageChange] Init CUDA: CPU +220, GPU +0, now: CPU 249, GPU 3019 (MiB)
[12/28/2023-15:10:17] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +302, GPU +286, now: CPU 574, GPU 3326 (MiB)
[12/28/2023-15:10:17] [I] Start parsing network model
[12/28/2023-15:10:20] [I] [TRT] ----------------------------------------------------------------
[12/28/2023-15:10:20] [I] [TRT] Input filename: yolo_nas_pose_l_fp32.onnx
[12/28/2023-15:10:20] [I] [TRT] ONNX IR version: 0.0.8
[12/28/2023-15:10:20] [I] [TRT] Opset version: 17
[12/28/2023-15:10:20] [I] [TRT] Producer name: pytorch
[12/28/2023-15:10:20] [I] [TRT] Producer version: 2.1.2
[12/28/2023-15:10:20] [I] [TRT] Domain:
[12/28/2023-15:10:20] [I] [TRT] Model version: 0
[12/28/2023-15:10:20] [I] [TRT] Doc string:
[12/28/2023-15:10:20] [I] [TRT] ----------------------------------------------------------------
[12/28/2023-15:10:21] [I] Finish parsing network model
[12/28/2023-15:10:21] [I] FP32 and INT8 precisions have been specified - more performance might be enabled by additionally specifying --fp16 or --best
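
The builder's hint above can be acted on directly. Rebuilding with --fp16 as well (or --best, which enables FP16 and INT8 together) gives the tactic selector FP16 kernels to fall back on where INT8 offers no speedup. Also, since no calibration cache is named in the build options, the INT8 scales in this run appear to come from trtexec's default calibration over random inputs, which is adequate for timing but not for accuracy; a cache generated on representative images can be passed with --calib. A minimal sketch, with the cache filename as a placeholder:

  /usr/src/tensorrt/bin/trtexec --onnx=yolo_nas_pose_l_fp32.onnx \
      --best \
      --calib=yolo_nas_pose_l_calibration.cache \
      --avgRuns=100 --duration=15 \
      --saveEngine=yolo_nas_pose_l_fp32.onnx.best.engine
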
[12/28/2023-15:10:21] [I] [TRT] ---------- Layers Running on DLA ----------
[12/28/2023-15:10:21] [I] [TRT] ---------- Layers Running on GPU ----------
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] TRAIN_STATION: [trainStation1]
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] MYELIN: {ForeignNode[/pre_process/pre_process.0/Cast.../pre_process/pre_process.2/Mul]}
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONSTANT: (Unnamed Layer* 455) [Constant]
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONSTANT: (Unnamed Layer* 456) [Constant]
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONSTANT: (Unnamed Layer* 457) [Constant]
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stem/conv/rbr_reparam/Conv + /model/backbone/stem/conv/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage1/downsample/rbr_reparam/Conv + /model/backbone/stage1/downsample/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage1/blocks/conv2/conv/Conv + /model/backbone/stage1/blocks/conv2/act/Relu || /model/backbone/stage1/blocks/conv1/conv/Conv + /model/backbone/stage1/blocks/conv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage1/blocks/bottlenecks/bottlenecks.0/cv1/rbr_reparam/Conv + /model/backbone/stage1/blocks/bottlenecks/bottlenecks.0/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage1/blocks/bottlenecks/bottlenecks.0/cv2/rbr_reparam/Conv + /model/backbone/stage1/blocks/bottlenecks/bottlenecks.0/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage1.blocks.bottlenecks.0.alpha + (Unnamed Layer* 15) [Shuffle] + /model/backbone/stage1/blocks/bottlenecks/bottlenecks.0/Mul, /model/backbone/stage1/blocks/bottlenecks/bottlenecks.0/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage1/blocks/bottlenecks/bottlenecks.1/cv1/rbr_reparam/Conv + /model/backbone/stage1/blocks/bottlenecks/bottlenecks.1/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage1/blocks/bottlenecks/bottlenecks.1/cv2/rbr_reparam/Conv + /model/backbone/stage1/blocks/bottlenecks/bottlenecks.1/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage1.blocks.bottlenecks.1.alpha + (Unnamed Layer* 23) [Shuffle] + /model/backbone/stage1/blocks/bottlenecks/bottlenecks.1/Mul, /model/backbone/stage1/blocks/bottlenecks/bottlenecks.1/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage1/blocks/conv1/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage1/blocks/bottlenecks/bottlenecks.0/Add_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage1/blocks/conv2/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage1/blocks/conv3/conv/Conv + /model/backbone/stage1/blocks/conv3/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/reduce_skip2/conv/Conv + /model/neck/neck2/reduce_skip2/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage2/downsample/rbr_reparam/Conv + /model/backbone/stage2/downsample/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/downsample/conv/Conv + /model/neck/neck2/downsample/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage2/blocks/conv2/conv/Conv + /model/backbone/stage2/blocks/conv2/act/Relu || /model/backbone/stage2/blocks/conv1/conv/Conv + /model/backbone/stage2/blocks/conv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage2/blocks/bottlenecks/bottlenecks.0/cv1/rbr_reparam/Conv + /model/backbone/stage2/blocks/bottlenecks/bottlenecks.0/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage2/blocks/bottlenecks/bottlenecks.0/cv2/rbr_reparam/Conv + /model/backbone/stage2/blocks/bottlenecks/bottlenecks.0/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage2.blocks.bottlenecks.0.alpha + (Unnamed Layer* 44) [Shuffle] + /model/backbone/stage2/blocks/bottlenecks/bottlenecks.0/Mul, /model/backbone/stage2/blocks/bottlenecks/bottlenecks.0/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage2/blocks/bottlenecks/bottlenecks.1/cv1/rbr_reparam/Conv + /model/backbone/stage2/blocks/bottlenecks/bottlenecks.1/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage2/blocks/bottlenecks/bottlenecks.1/cv2/rbr_reparam/Conv + /model/backbone/stage2/blocks/bottlenecks/bottlenecks.1/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage2.blocks.bottlenecks.1.alpha + (Unnamed Layer* 52) [Shuffle] + /model/backbone/stage2/blocks/bottlenecks/bottlenecks.1/Mul, /model/backbone/stage2/blocks/bottlenecks/bottlenecks.1/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage2/blocks/bottlenecks/bottlenecks.2/cv1/rbr_reparam/Conv + /model/backbone/stage2/blocks/bottlenecks/bottlenecks.2/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage2/blocks/bottlenecks/bottlenecks.2/cv2/rbr_reparam/Conv + /model/backbone/stage2/blocks/bottlenecks/bottlenecks.2/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage2.blocks.bottlenecks.2.alpha + (Unnamed Layer* 60) [Shuffle] + /model/backbone/stage2/blocks/bottlenecks/bottlenecks.2/Mul, /model/backbone/stage2/blocks/bottlenecks/bottlenecks.2/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage2/blocks/conv1/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage2/blocks/bottlenecks/bottlenecks.0/Add_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage2/blocks/bottlenecks/bottlenecks.1/Add_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage2/blocks/conv2/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage2/blocks/conv3/conv/Conv + /model/backbone/stage2/blocks/conv3/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/reduce_skip2/conv/Conv + /model/neck/neck1/reduce_skip2/act/Relu || /model/neck/neck2/reduce_skip1/conv/Conv + /model/neck/neck2/reduce_skip1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/downsample/rbr_reparam/Conv + /model/backbone/stage3/downsample/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/downsample/conv/Conv + /model/neck/neck1/downsample/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/conv2/conv/Conv + /model/backbone/stage3/blocks/conv2/act/Relu || /model/backbone/stage3/blocks/conv1/conv/Conv + /model/backbone/stage3/blocks/conv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.0/cv1/rbr_reparam/Conv + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.0/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.0/cv2/rbr_reparam/Conv + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.0/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage3.blocks.bottlenecks.0.alpha + (Unnamed Layer* 83) [Shuffle] + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.0/Mul, /model/backbone/stage3/blocks/bottlenecks/bottlenecks.0/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.1/cv1/rbr_reparam/Conv + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.1/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.1/cv2/rbr_reparam/Conv + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.1/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage3.blocks.bottlenecks.1.alpha + (Unnamed Layer* 91) [Shuffle] + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.1/Mul, /model/backbone/stage3/blocks/bottlenecks/bottlenecks.1/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.2/cv1/rbr_reparam/Conv + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.2/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.2/cv2/rbr_reparam/Conv + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.2/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage3.blocks.bottlenecks.2.alpha + (Unnamed Layer* 99) [Shuffle] + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.2/Mul, /model/backbone/stage3/blocks/bottlenecks/bottlenecks.2/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.3/cv1/rbr_reparam/Conv + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.3/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.3/cv2/rbr_reparam/Conv + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.3/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage3.blocks.bottlenecks.3.alpha + (Unnamed Layer* 107) [Shuffle] + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.3/Mul, /model/backbone/stage3/blocks/bottlenecks/bottlenecks.3/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.4/cv1/rbr_reparam/Conv + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.4/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.4/cv2/rbr_reparam/Conv + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.4/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage3.blocks.bottlenecks.4.alpha + (Unnamed Layer* 115) [Shuffle] + /model/backbone/stage3/blocks/bottlenecks/bottlenecks.4/Mul, /model/backbone/stage3/blocks/bottlenecks/bottlenecks.4/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage3/blocks/conv1/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.0/Add_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.1/Add_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.2/Add_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage3/blocks/bottlenecks/bottlenecks.3/Add_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage3/blocks/conv2/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage3/blocks/conv3/conv/Conv + /model/backbone/stage3/blocks/conv3/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/reduce_skip1/conv/Conv + /model/neck/neck1/reduce_skip1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage4/downsample/rbr_reparam/Conv + /model/backbone/stage4/downsample/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage4/blocks/conv2/conv/Conv + /model/backbone/stage4/blocks/conv2/act/Relu || /model/backbone/stage4/blocks/conv1/conv/Conv + /model/backbone/stage4/blocks/conv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage4/blocks/bottlenecks/bottlenecks.0/cv1/rbr_reparam/Conv + /model/backbone/stage4/blocks/bottlenecks/bottlenecks.0/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage4/blocks/bottlenecks/bottlenecks.0/cv2/rbr_reparam/Conv + /model/backbone/stage4/blocks/bottlenecks/bottlenecks.0/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage4.blocks.bottlenecks.0.alpha + (Unnamed Layer* 134) [Shuffle] + /model/backbone/stage4/blocks/bottlenecks/bottlenecks.0/Mul, /model/backbone/stage4/blocks/bottlenecks/bottlenecks.0/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage4/blocks/bottlenecks/bottlenecks.1/cv1/rbr_reparam/Conv + /model/backbone/stage4/blocks/bottlenecks/bottlenecks.1/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage4/blocks/bottlenecks/bottlenecks.1/cv2/rbr_reparam/Conv + /model/backbone/stage4/blocks/bottlenecks/bottlenecks.1/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.backbone.stage4.blocks.bottlenecks.1.alpha + (Unnamed Layer* 142) [Shuffle] + /model/backbone/stage4/blocks/bottlenecks/bottlenecks.1/Mul, /model/backbone/stage4/blocks/bottlenecks/bottlenecks.1/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage4/blocks/conv1/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage4/blocks/bottlenecks/bottlenecks.0/Add_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/stage4/blocks/conv2/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/stage4/blocks/conv3/conv/Conv + /model/backbone/stage4/blocks/conv3/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/context_module/cv1/conv/Conv + /model/backbone/context_module/cv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POOLING: /model/backbone/context_module/m.2/MaxPool
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POOLING: /model/backbone/context_module/m.1/MaxPool
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POOLING: /model/backbone/context_module/m.0/MaxPool
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/backbone/context_module/cv1/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/backbone/context_module/cv2/conv/Conv + /model/backbone/context_module/cv2/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/conv/conv/Conv + /model/neck/neck1/conv/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] DECONVOLUTION: /model/neck/neck1/upsample/ConvTranspose
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/reduce_after_concat/conv/Conv + /model/neck/neck1/reduce_after_concat/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/blocks/conv2/conv/Conv + /model/neck/neck1/blocks/conv2/act/Relu || /model/neck/neck1/blocks/conv1/conv/Conv + /model/neck/neck1/blocks/conv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/blocks/bottlenecks/bottlenecks.0/cv1/rbr_reparam/Conv + /model/neck/neck1/blocks/bottlenecks/bottlenecks.0/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/blocks/bottlenecks/bottlenecks.0/cv2/rbr_reparam/Conv + /model/neck/neck1/blocks/bottlenecks/bottlenecks.0/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck1.blocks.bottlenecks.0.alpha + (Unnamed Layer* 171) [Shuffle] + /model/neck/neck1/blocks/bottlenecks/bottlenecks.0/Mul, /model/neck/neck1/blocks/bottlenecks/bottlenecks.0/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/blocks/bottlenecks/bottlenecks.1/cv1/rbr_reparam/Conv + /model/neck/neck1/blocks/bottlenecks/bottlenecks.1/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/blocks/bottlenecks/bottlenecks.1/cv2/rbr_reparam/Conv + /model/neck/neck1/blocks/bottlenecks/bottlenecks.1/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck1.blocks.bottlenecks.1.alpha + (Unnamed Layer* 179) [Shuffle] + /model/neck/neck1/blocks/bottlenecks/bottlenecks.1/Mul, /model/neck/neck1/blocks/bottlenecks/bottlenecks.1/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/blocks/bottlenecks/bottlenecks.2/cv1/rbr_reparam/Conv + /model/neck/neck1/blocks/bottlenecks/bottlenecks.2/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/blocks/bottlenecks/bottlenecks.2/cv2/rbr_reparam/Conv + /model/neck/neck1/blocks/bottlenecks/bottlenecks.2/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck1.blocks.bottlenecks.2.alpha + (Unnamed Layer* 187) [Shuffle] + /model/neck/neck1/blocks/bottlenecks/bottlenecks.2/Mul, /model/neck/neck1/blocks/bottlenecks/bottlenecks.2/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/blocks/bottlenecks/bottlenecks.3/cv1/rbr_reparam/Conv + /model/neck/neck1/blocks/bottlenecks/bottlenecks.3/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/blocks/bottlenecks/bottlenecks.3/cv2/rbr_reparam/Conv + /model/neck/neck1/blocks/bottlenecks/bottlenecks.3/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck1.blocks.bottlenecks.3.alpha + (Unnamed Layer* 195) [Shuffle] + /model/neck/neck1/blocks/bottlenecks/bottlenecks.3/Mul, /model/neck/neck1/blocks/bottlenecks/bottlenecks.3/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/neck/neck1/blocks/conv2/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck1/blocks/conv3/conv/Conv + /model/neck/neck1/blocks/conv3/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/conv/conv/Conv + /model/neck/neck2/conv/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] DECONVOLUTION: /model/neck/neck2/upsample/ConvTranspose
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/neck/neck2/reduce_skip1/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/reduce_after_concat/conv/Conv + /model/neck/neck2/reduce_after_concat/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/blocks/conv2/conv/Conv + /model/neck/neck2/blocks/conv2/act/Relu || /model/neck/neck2/blocks/conv1/conv/Conv + /model/neck/neck2/blocks/conv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/blocks/bottlenecks/bottlenecks.0/cv1/rbr_reparam/Conv + /model/neck/neck2/blocks/bottlenecks/bottlenecks.0/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/blocks/bottlenecks/bottlenecks.0/cv2/rbr_reparam/Conv + /model/neck/neck2/blocks/bottlenecks/bottlenecks.0/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck2.blocks.bottlenecks.0.alpha + (Unnamed Layer* 216) [Shuffle] + /model/neck/neck2/blocks/bottlenecks/bottlenecks.0/Mul, /model/neck/neck2/blocks/bottlenecks/bottlenecks.0/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/blocks/bottlenecks/bottlenecks.1/cv1/rbr_reparam/Conv + /model/neck/neck2/blocks/bottlenecks/bottlenecks.1/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/blocks/bottlenecks/bottlenecks.1/cv2/rbr_reparam/Conv + /model/neck/neck2/blocks/bottlenecks/bottlenecks.1/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck2.blocks.bottlenecks.1.alpha + (Unnamed Layer* 224) [Shuffle] + /model/neck/neck2/blocks/bottlenecks/bottlenecks.1/Mul, /model/neck/neck2/blocks/bottlenecks/bottlenecks.1/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/blocks/bottlenecks/bottlenecks.2/cv1/rbr_reparam/Conv + /model/neck/neck2/blocks/bottlenecks/bottlenecks.2/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/blocks/bottlenecks/bottlenecks.2/cv2/rbr_reparam/Conv + /model/neck/neck2/blocks/bottlenecks/bottlenecks.2/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck2.blocks.bottlenecks.2.alpha + (Unnamed Layer* 232) [Shuffle] + /model/neck/neck2/blocks/bottlenecks/bottlenecks.2/Mul, /model/neck/neck2/blocks/bottlenecks/bottlenecks.2/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/blocks/bottlenecks/bottlenecks.3/cv1/rbr_reparam/Conv + /model/neck/neck2/blocks/bottlenecks/bottlenecks.3/cv1/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/blocks/bottlenecks/bottlenecks.3/cv2/rbr_reparam/Conv + /model/neck/neck2/blocks/bottlenecks/bottlenecks.3/cv2/nonlinearity/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck2.blocks.bottlenecks.3.alpha + (Unnamed Layer* 240) [Shuffle] + /model/neck/neck2/blocks/bottlenecks/bottlenecks.3/Mul, /model/neck/neck2/blocks/bottlenecks/bottlenecks.3/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/neck/neck2/blocks/conv2/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck2/blocks/conv3/conv/Conv + /model/neck/neck2/blocks/conv3/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head1/bbox_stem/seq/conv/Conv + /model/heads/head1/bbox_stem/seq/act/Relu || /model/heads/head1/pose_stem/seq/conv/Conv + /model/heads/head1/pose_stem/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck3/conv/conv/Conv + /model/neck/neck3/conv/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head1/reg_convs/reg_convs.0/seq/conv/Conv + /model/heads/head1/reg_convs/reg_convs.0/seq/act/Relu || /model/heads/head1/cls_convs/cls_convs.0/seq/conv/Conv + /model/heads/head1/cls_convs/cls_convs.0/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head1/pose_convs/pose_convs.0/seq/conv/Conv + /model/heads/head1/pose_convs/pose_convs.0/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck3/blocks/conv2/conv/Conv + /model/neck/neck3/blocks/conv2/act/Relu || /model/neck/neck3/blocks/conv1/conv/Conv + /model/neck/neck3/blocks/conv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head1/cls_pred/Conv
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head1/reg_pred/Conv
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head1/pose_convs/pose_convs.1/seq/conv/Conv + /model/heads/head1/pose_convs/pose_convs.1/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck3/blocks/bottlenecks/bottlenecks.0/cv1/conv/Conv + /model/neck/neck3/blocks/bottlenecks/bottlenecks.0/cv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] SHUFFLE: /model/heads/Reshape + /model/heads/Transpose
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head1/pose_pred/Conv
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck3/blocks/bottlenecks/bottlenecks.0/cv2/conv/Conv + /model/neck/neck3/blocks/bottlenecks/bottlenecks.0/cv2/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] SOFTMAX: /model/heads/Softmax
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck3.blocks.bottlenecks.0.alpha + (Unnamed Layer* 271) [Shuffle] + /model/neck/neck3/blocks/bottlenecks/bottlenecks.0/Mul, /model/neck/neck3/blocks/bottlenecks/bottlenecks.0/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/Conv
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck3/blocks/bottlenecks/bottlenecks.1/cv1/conv/Conv + /model/neck/neck3/blocks/bottlenecks/bottlenecks.1/cv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck3/blocks/bottlenecks/bottlenecks.1/cv2/conv/Conv + /model/neck/neck3/blocks/bottlenecks/bottlenecks.1/cv2/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck3.blocks.bottlenecks.1.alpha + (Unnamed Layer* 294) [Shuffle] + /model/neck/neck3/blocks/bottlenecks/bottlenecks.1/Mul, /model/neck/neck3/blocks/bottlenecks/bottlenecks.1/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck3/blocks/bottlenecks/bottlenecks.2/cv1/conv/Conv + /model/neck/neck3/blocks/bottlenecks/bottlenecks.2/cv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck3/blocks/bottlenecks/bottlenecks.2/cv2/conv/Conv + /model/neck/neck3/blocks/bottlenecks/bottlenecks.2/cv2/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck3.blocks.bottlenecks.2.alpha + (Unnamed Layer* 302) [Shuffle] + /model/neck/neck3/blocks/bottlenecks/bottlenecks.2/Mul, /model/neck/neck3/blocks/bottlenecks/bottlenecks.2/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck3/blocks/bottlenecks/bottlenecks.3/cv1/conv/Conv + /model/neck/neck3/blocks/bottlenecks/bottlenecks.3/cv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck3/blocks/bottlenecks/bottlenecks.3/cv2/conv/Conv + /model/neck/neck3/blocks/bottlenecks/bottlenecks.3/cv2/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck3.blocks.bottlenecks.3.alpha + (Unnamed Layer* 310) [Shuffle] + /model/neck/neck3/blocks/bottlenecks/bottlenecks.3/Mul, /model/neck/neck3/blocks/bottlenecks/bottlenecks.3/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/neck/neck3/blocks/conv2/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck3/blocks/conv3/conv/Conv + /model/neck/neck3/blocks/conv3/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head2/pose_stem/seq/conv/Conv + /model/heads/head2/pose_stem/seq/act/Relu || /model/heads/head2/bbox_stem/seq/conv/Conv + /model/heads/head2/bbox_stem/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck4/conv/conv/Conv + /model/neck/neck4/conv/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head2/reg_convs/reg_convs.0/seq/conv/Conv + /model/heads/head2/reg_convs/reg_convs.0/seq/act/Relu || /model/heads/head2/cls_convs/cls_convs.0/seq/conv/Conv + /model/heads/head2/cls_convs/cls_convs.0/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head2/pose_convs/pose_convs.0/seq/conv/Conv + /model/heads/head2/pose_convs/pose_convs.0/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck4/blocks/conv2/conv/Conv + /model/neck/neck4/blocks/conv2/act/Relu || /model/neck/neck4/blocks/conv1/conv/Conv + /model/neck/neck4/blocks/conv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head2/cls_pred/Conv
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head2/reg_pred/Conv
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head2/pose_convs/pose_convs.1/seq/conv/Conv + /model/heads/head2/pose_convs/pose_convs.1/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck4/blocks/bottlenecks/bottlenecks.0/cv1/conv/Conv + /model/neck/neck4/blocks/bottlenecks/bottlenecks.0/cv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] SHUFFLE: /model/heads/Reshape_4 + /model/heads/Transpose_3
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head2/pose_pred/Conv
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck4/blocks/bottlenecks/bottlenecks.0/cv2/conv/Conv + /model/neck/neck4/blocks/bottlenecks/bottlenecks.0/cv2/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] SOFTMAX: /model/heads/Softmax_1
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck4.blocks.bottlenecks.0.alpha + (Unnamed Layer* 341) [Shuffle] + /model/neck/neck4/blocks/bottlenecks/bottlenecks.0/Mul, /model/neck/neck4/blocks/bottlenecks/bottlenecks.0/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/Conv_1
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck4/blocks/bottlenecks/bottlenecks.1/cv1/conv/Conv + /model/neck/neck4/blocks/bottlenecks/bottlenecks.1/cv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck4/blocks/bottlenecks/bottlenecks.1/cv2/conv/Conv + /model/neck/neck4/blocks/bottlenecks/bottlenecks.1/cv2/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck4.blocks.bottlenecks.1.alpha + (Unnamed Layer* 364) [Shuffle] + /model/neck/neck4/blocks/bottlenecks/bottlenecks.1/Mul, /model/neck/neck4/blocks/bottlenecks/bottlenecks.1/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck4/blocks/bottlenecks/bottlenecks.2/cv1/conv/Conv + /model/neck/neck4/blocks/bottlenecks/bottlenecks.2/cv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck4/blocks/bottlenecks/bottlenecks.2/cv2/conv/Conv + /model/neck/neck4/blocks/bottlenecks/bottlenecks.2/cv2/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck4.blocks.bottlenecks.2.alpha + (Unnamed Layer* 372) [Shuffle] + /model/neck/neck4/blocks/bottlenecks/bottlenecks.2/Mul, /model/neck/neck4/blocks/bottlenecks/bottlenecks.2/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck4/blocks/bottlenecks/bottlenecks.3/cv1/conv/Conv + /model/neck/neck4/blocks/bottlenecks/bottlenecks.3/cv1/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck4/blocks/bottlenecks/bottlenecks.3/cv2/conv/Conv + /model/neck/neck4/blocks/bottlenecks/bottlenecks.3/cv2/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] POINTWISE: PWN(model.neck.neck4.blocks.bottlenecks.3.alpha + (Unnamed Layer* 380) [Shuffle] + /model/neck/neck4/blocks/bottlenecks/bottlenecks.3/Mul, /model/neck/neck4/blocks/bottlenecks/bottlenecks.3/Add)
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] COPY: /model/neck/neck4/blocks/conv2/act/Relu_output_0 copy
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/neck/neck4/blocks/conv3/conv/Conv + /model/neck/neck4/blocks/conv3/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head3/bbox_stem/seq/conv/Conv + /model/heads/head3/bbox_stem/seq/act/Relu || /model/heads/head3/pose_stem/seq/conv/Conv + /model/heads/head3/pose_stem/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head3/reg_convs/reg_convs.0/seq/conv/Conv + /model/heads/head3/reg_convs/reg_convs.0/seq/act/Relu || /model/heads/head3/cls_convs/cls_convs.0/seq/conv/Conv + /model/heads/head3/cls_convs/cls_convs.0/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head3/pose_convs/pose_convs.0/seq/conv/Conv + /model/heads/head3/pose_convs/pose_convs.0/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head3/cls_pred/Conv
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head3/reg_pred/Conv
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head3/pose_convs/pose_convs.1/seq/conv/Conv + /model/heads/head3/pose_convs/pose_convs.1/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] SHUFFLE: /model/heads/Reshape_8 + /model/heads/Transpose_6
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head3/pose_convs/pose_convs.2/seq/conv/Conv + /model/heads/head3/pose_convs/pose_convs.2/seq/act/Relu
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] SOFTMAX: /model/heads/Softmax_2
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/head3/pose_pred/Conv
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] CONVOLUTION: /model/heads/Conv_2
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] MYELIN: {ForeignNode[/model/heads/head1/Slice_1.../post_process/Reshape_2]}
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] NMS: batched_nms_26
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] DEVICE_TO_SHAPE_HOST: (Unnamed Layer* 459) [NMS]_1_output[DevicetoShapeHostCopy]
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] TRAIN_STATION: [trainStation2]
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] MYELIN: {ForeignNode[/model/heads/head1/Slice...graph2_/Concat_5]}
[12/28/2023-15:10:21] [I] [TRT] [GpuLayer] TRAIN_STATION: [trainStation3]
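
All of the layers above were placed on the GPU; the "Layers Running on DLA" list is empty even though the Orin device has DLA cores. If DLA offload is worth exploring, trtexec can be asked to place eligible layers on a DLA core and fall back to the GPU for the rest. Whether this helps YOLO-NAS-Pose is not established by this log, so treat the following only as a sketch to experiment with:

  /usr/src/tensorrt/bin/trtexec --onnx=yolo_nas_pose_l_fp32.onnx --int8 \
      --useDLACore=0 --allowGPUFallback \
      --saveEngine=yolo_nas_pose_l_fp32.onnx.int8.dla0.engine
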
[12/28/2023-15:10:35] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +534, GPU +335, now: CPU 1351, GPU 3954 (MiB)
[12/28/2023-15:10:37] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +82, GPU +63, now: CPU 1433, GPU 4017 (MiB)
[12/28/2023-15:10:37] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[12/28/2023-16:08:54] [I] [TRT] Total Activation Memory: 7959592448
[12/28/2023-16:08:54] [I] [TRT] Detected 1 inputs and 1 output network tensors.
[12/28/2023-16:09:08] [I] [TRT] Total Host Persistent Memory: 331808
[12/28/2023-16:09:08] [I] [TRT] Total Device Persistent Memory: 38912
[12/28/2023-16:09:08] [I] [TRT] Total Scratch Memory: 134217728
[12/28/2023-16:09:08] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 88 MiB, GPU 2461 MiB
[12/28/2023-16:09:08] [I] [TRT] [BlockAssignment] Started assigning block shifts. This will take 160 steps to complete.
[12/28/2023-16:09:08] [I] [TRT] [BlockAssignment] Algorithm ShiftNTopDown took 55.0722ms to assign 13 blocks to 160 nodes requiring 147361280 bytes.
[12/28/2023-16:09:08] [I] [TRT] Total Activation Memory: 147361280
[12/28/2023-16:09:13] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU -1, now: CPU 1835, GPU 5360 (MiB)
[12/28/2023-16:09:13] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +52, GPU +64, now: CPU 52, GPU 64 (MiB)
[12/28/2023-16:09:14] [I] Engine built in 3546.25 sec.
[12/28/2023-16:09:14] [I] [TRT] Loaded engine size: 54 MiB
[12/28/2023-16:09:15] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1299, GPU 5007 (MiB)
[12/28/2023-16:09:15] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +52, now: CPU 0, GPU 52 (MiB)
[12/28/2023-16:09:15] [I] Engine deserialized in 0.128069 sec.
[12/28/2023-16:09:15] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +0, now: CPU 1300, GPU 5007 (MiB)
[12/28/2023-16:09:15] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +140, now: CPU 0, GPU 192 (MiB)
[12/28/2023-16:09:15] [I] Setting persistentCacheLimit to 0 bytes.
[12/28/2023-16:09:15] [I] Using random values for input onnx::Cast_0
[12/28/2023-16:09:15] [I] Created input binding for onnx::Cast_0 with dimensions 1x3x640x640
[12/28/2023-16:09:15] [I] Using random values for output graph2_flat_predictions
[12/28/2023-16:09:15] [I] Created output binding for graph2_flat_predictions with dimensions -1x57
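
The bindings above show one static input (onnx::Cast_0, 1x3x640x640 FP32) and one flattened output (graph2_flat_predictions, Nx57) whose leading dimension depends on how many detections survive NMS; for timing, trtexec fills the input with random values. To exercise the saved engine on a real preprocessed frame instead, the input can be supplied from a raw binary blob (the .bin filename below is a placeholder, and the colons inside the tensor name may need quoting or escaping depending on the trtexec version):

  /usr/src/tensorrt/bin/trtexec --loadEngine=yolo_nas_pose_l_fp32.onnx.int8.engine \
      --loadInputs=onnx::Cast_0:frame_1x3x640x640_fp32.bin \
      --dumpOutput
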
[12/28/2023-16:09:15] [I] Starting inference
[12/28/2023-16:09:30] [I] Warmup completed 3 queries over 200 ms
[12/28/2023-16:09:30] [I] Timing trace has 720 queries over 15.0291 s
[12/28/2023-16:09:30] [I]
[12/28/2023-16:09:30] [I] === Trace details ===
[12/28/2023-16:09:30] [I] Trace averages of 100 runs:
[12/28/2023-16:09:30] [I] Average on 100 runs - GPU latency: 20.9331 ms - Host latency: 21.0491 ms (enqueue 20.9849 ms)
[12/28/2023-16:09:30] [I] Average on 100 runs - GPU latency: 20.3949 ms - Host latency: 20.5041 ms (enqueue 20.4581 ms)
[12/28/2023-16:09:30] [I] Average on 100 runs - GPU latency: 20.9551 ms - Host latency: 21.0733 ms (enqueue 21.0198 ms)
[12/28/2023-16:09:30] [I] Average on 100 runs - GPU latency: 20.8394 ms - Host latency: 20.9537 ms (enqueue 20.8945 ms)
[12/28/2023-16:09:30] [I] Average on 100 runs - GPU latency: 20.4666 ms - Host latency: 20.5773 ms (enqueue 20.5432 ms)
[12/28/2023-16:09:30] [I] Average on 100 runs - GPU latency: 20.689 ms - Host latency: 20.8023 ms (enqueue 20.7502 ms)
[12/28/2023-16:09:30] [I] Average on 100 runs - GPU latency: 20.7472 ms - Host latency: 20.8614 ms (enqueue 20.8134 ms)
[12/28/2023-16:09:30] [I]
[12/28/2023-16:09:30] [I] === Performance summary ===
[12/28/2023-16:09:30] [I] Throughput: 47.9071 qps
[12/28/2023-16:09:30] [I] Latency: min = 19.4119 ms, max = 30.8398 ms, mean = 20.8421 ms, median = 20.7168 ms, percentile(90%) = 21.5566 ms, percentile(95%) = 22.2598 ms, percentile(99%) = 28.5723 ms
[12/28/2023-16:09:30] [I] Enqueue Time: min = 19.3838 ms, max = 30.7852 ms, mean = 20.7908 ms, median = 20.6725 ms, percentile(90%) = 21.4995 ms, percentile(95%) = 22.1094 ms, percentile(99%) = 28.5068 ms
[12/28/2023-16:09:30] [I] H2D Latency: min = 0.0800781 ms, max = 0.133301 ms, mean = 0.0950138 ms, median = 0.0957031 ms, percentile(90%) = 0.0981445 ms, percentile(95%) = 0.0986328 ms, percentile(99%) = 0.0996094 ms
[12/28/2023-16:09:30] [I] GPU Compute Time: min = 19.304 ms, max = 30.7158 ms, mean = 20.7286 ms, median = 20.5989 ms, percentile(90%) = 21.4419 ms, percentile(95%) = 22.1621 ms, percentile(99%) = 28.4521 ms
[12/28/2023-16:09:30] [I] D2H Latency: min = 0.00292969 ms, max = 0.0688477 ms, mean = 0.0184459 ms, median = 0.0166016 ms, percentile(90%) = 0.0273438 ms, percentile(95%) = 0.0288086 ms, percentile(99%) = 0.0444336 ms
[12/28/2023-16:09:30] [I] Total Host Walltime: 15.0291 s
[12/28/2023-16:09:30] [I] Total GPU Compute Time: 14.9246 s
[12/28/2023-16:09:30] [I] Explanations of the performance metrics are printed in the verbose logs.
[12/28/2023-16:09:30] [I]
&&&& PASSED TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --onnx=yolo_nas_pose_l_fp32.onnx --int8 --avgRuns=100 --duration=15 --saveEngine=yolo_nas_pose_l_fp32.onnx.int8.engine
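
Because the engine was serialized to yolo_nas_pose_l_fp32.onnx.int8.engine, the timing run can be repeated later without paying the roughly 59-minute build again (3546.25 s above) by loading the saved engine; adding --useSpinWait can further stabilize host-side latency numbers. A sketch of the equivalent benchmark:

  /usr/src/tensorrt/bin/trtexec --loadEngine=yolo_nas_pose_l_fp32.onnx.int8.engine \
      --avgRuns=100 --duration=15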