Apollo PETRv1 Model Optimization: Plan 1
I. Background
1. Problem Description
In the Apollo campus-edition autonomous driving system, the camera_detection_bev module performs bird's-eye-view (BEV) object detection from the surround-view camera images; at its core is deep-learning inference with the PETRv1 model. The module originally ran inference with Paddle Inference, but attempts to enable TensorRT (TRT) acceleration failed and the module could not run.
On the NVIDIA Orin platform, FP32 (single-precision) inference with this model takes as long as 594 ms per frame, which cannot meet the real-time requirement (target frame rate of 10 FPS, i.e. under 100 ms per frame).
2. Optimization Goals
This document addresses the problems above. The specific goals are:
- Accuracy evaluation: try lowering the model's compute precision (e.g. FP16 half precision or INT8 integers) and evaluate the impact of these changes on detection accuracy on a standard dataset.
- Performance optimization: try running the model's largest subgraph (the most compute-intensive part) with TensorRT, and benchmark it against the original Paddle Inference scheme.
II. Core Concepts and Optimization Approach
Before diving into the steps, let's go over a few key concepts and the core idea behind this optimization.
1. Why did TRT acceleration fail?
Complex models (especially ones with many custom operators or dynamic shapes) may not be fully parsed by TensorRT into a single, end-to-end optimized engine.
The PETRv1 model most likely falls into this category.
2. What is "subgraph partitioning"?
This is a common optimization strategy. When a complete model cannot be fully supported by a given inference engine (such as TRT), we can "cut it up":
- CNN Backbone: the front part of the model, usually a convolutional neural network (CNN) that extracts features from the images. Its structure is regular and very well suited to TensorRT optimization.
- Non-CNN post-processing (Post-process): the back part of the model, which may contain Transformers, decoders, non-maximum suppression (NMS) and other complex operations, possibly including operators that are TRT-unfriendly or custom.
We split the model into these two subgraphs and run each one with the engine that suits it best.
3. What is mixed-precision inference?
Neural network inference does not always need full FP32 precision. Mixed precision means using different numerical precisions in different parts of the model, which greatly improves speed and reduces memory use with almost no loss of accuracy (a small numeric sketch follows the list):
- FP16 (half precision): uses half the memory of FP32 and is extremely fast on NVIDIA GPUs with Tensor Cores. Typically used for CNN feature extraction.
- INT8 (integer precision): halves memory again and is even faster, but requires calibration of weights and activations and may cause a slight accuracy drop. Use with care in accuracy-sensitive post-processing.
- FP32 (full precision): keeps the highest precision, used for computations that are very sensitive to numerical precision.
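As a quick, standalone illustration (not part of the pipeline) of what FP16 rounding does to a well-scaled FP32 tensor, and why feature extraction tolerates FP16 while accuracy-sensitive parts stay in FP32:
import numpy as np

# A well-scaled, activation-like tensor in FP32.
x = np.random.randn(6, 256, 20, 50).astype(np.float32)

# Round-trip through FP16, which is effectively what an FP16 layer does at its boundaries.
x_fp16 = x.astype(np.float16).astype(np.float32)

abs_err = np.abs(x - x_fp16)
print("max abs error:", abs_err.max())    # on the order of 1e-3 for data of this scale
print("mean abs error:", abs_err.mean())  # much smaller on average
print("memory: FP32", x.nbytes, "bytes vs FP16", x.astype(np.float16).nbytes, "bytes")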
Our optimization plan is based exactly on these ideas:
- Split the PETRv1 model into two sub-models: the CNN Backbone and the non-CNN post-processing.
- Accelerate the CNN Backbone with TensorRT, trying FP16 and INT8 mixed precision.
- Keep running the non-CNN post-processing with Paddle Inference, using FP32 and FP16 mixed precision to preserve accuracy.
- Finally, stitch the outputs of the two engines together to complete end-to-end (E2E) inference.
Test Summary
With the optimizations above, we obtained significant results in the Apollo campus edition on the Orin platform:
- Performance: end-to-end inference time dropped sharply to 159.358 ms (of which the TensorRT part takes only 39.0737 ms). This still misses the 10 FPS target, so further optimization is needed.
- Accuracy: detection accuracy (mAP) on the standard dataset did not drop at all; it actually improved slightly from the 0.3415 baseline to 0.3503, confirming that the optimization scheme is effective and stable.
III. Detailed Steps (execute in order)
The following steps walk you through the whole optimization workflow: environment setup, model conversion, engine building, inference verification, and accuracy testing.
Step 1: Create a development container on an X86 server
Purpose: provide an isolated environment with all required dependencies (CUDA, cuDNN, TensorRT, PaddlePaddle, etc.) for model conversion, compilation, and initial verification.
cd /home/apollo
# Clean up any old container
docker stop bev_opt_ver1
docker rm bev_opt_ver1
# Create and start a new container
# --gpus all: give the container access to all GPUs
# --shm-size=128g: shared memory size; may be needed when handling large models
# -v $PWD:/home: mount the current directory at /home inside the container for easy file exchange
docker run --gpus all --shm-size=128g -id -e NVIDIA_VISIBLE_DEVICES=all \
    --privileged --net=host \
    -v $PWD:/home -w /home \
    --name=bev_opt_ver1 registry.baidubce.com/paddlepaddle/paddle:2.4.2-gpu-cuda11.7-cudnn8.4-trt8.4 /bin/bash
docker start bev_opt_ver1
docker exec -ti bev_opt_ver1 bash
Step 2: Install extra dependencies
Install the tools needed for model conversion and compilation inside the container.
# Install the tool that converts Paddle models to ONNX
pip install paddle2onnx -i https://pypi.tuna.tsinghua.edu.cn/simple
# Install the OpenBLAS linear algebra library to speed up computation
apt install libopenblas-dev -y
# Download and install a specific version of CMake (build tool)
wget https://github.com/Kitware/CMake/releases/download/v3.15.4/cmake-3.15.4-Linux-x86_64.sh
bash cmake-3.15.4-Linux-x86_64.sh --prefix=/usr/local --skip-license
Step 3: Build the Paddle Inference library
We build the specified version of PaddlePaddle from source to produce an inference library suited to our environment.
cd /home
# 1. Clone the PaddlePaddle source repository
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle/
# 2. Install Python dependencies
pip install -r /home/Paddle/python/requirements.txt
# 3. Switch to the 2.4 release branch
git checkout release/2.4
git submodule update --init --recursive
# 4. Create and enter the build directory
rm -rf build_cuda
mkdir -p /home/paddleinference-2.4.2
mkdir build_cuda && cd build_cuda
# 5. Configure with CMake, then build and install
# -DON_INFER=ON: build only the inference library to save time
/usr/local/bin/cmake .. -DCMAKE_INSTALL_PREFIX=/home/paddleinference-2.4.2 \
    -DPY_VERSION=3.7 -DWITH_TESTING=OFF -DWITH_MKL=OFF -DWITH_GPU=ON -DON_INFER=ON
make install
# 6. Copy the built library files to the target directory
cp ./paddle_inference_install_dir/paddle/* /home/paddleinference-2.4.2/ -rf
Step 4: Extract the CNN Backbone subgraph and convert it to ONNX
Purpose: extract the CNN part of the model that is suitable for TRT acceleration and convert it to ONNX, the intermediate step before building a TRT engine.
cd /home/bev_opt_ver1
# Create a directory for the extracted Backbone model
rm -rf petrv1_backbone
mkdir -p petrv1_backbone
# Run the Python script that cuts out the Backbone from the original model using the predefined input/output node names
python3 1_gen_backbone.py
# Convert the Paddle-format Backbone model to ONNX with paddle2onnx
paddle2onnx \
    --model_dir petrv1_backbone \
    --model_filename petr_inference.pdmodel \
    --params_filename petr_inference.pdiparams \
    --save_file petrv1_backbone.onnx \
    --opset_version 12 \
    --enable_dev_version True
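Optionally, the exported ONNX can be sanity-checked with onnxruntime before building a TensorRT engine. This is an illustrative sketch, not part of the original workflow; it assumes onnxruntime is installed and simply feeds random data shaped after the graph's declared inputs:
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("petrv1_backbone.onnx", providers=["CPUExecutionProvider"])

# Build random feeds from the graph's declared inputs (shapes were fixed in 1_gen_backbone.py).
feeds = {}
for inp in sess.get_inputs():
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    feeds[inp.name] = np.random.rand(*shape).astype(np.float32)

# Run once and print output names/shapes; the backbone output is expected to be (6, 256, 20, 50).
for meta, out in zip(sess.get_outputs(), sess.run(None, feeds)):
    print(meta.name, out.shape)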
Step 5: Extract the non-CNN post-processing part as a Paddle model
Purpose: extract the remaining post-processing part of the model separately and keep it in Paddle format; it will later be run with Paddle Inference.
rm -rf petrv1_postproc
mkdir -p petrv1_postproc
# Run another Python script that cuts out the post-processing part
python3 2_gen_postproc.py
Step 6: Build the TensorRT inference program (for the Backbone)
Purpose: compile a C++ program that loads the converted ONNX model, then builds and runs a TensorRT engine for inference.
# Compile the C++ source with g++
g++ -std=c++11 -Wno-deprecated-declarations -O2 -o 3_run_backbone 3_run_backbone.cpp \
    -I /usr/local/cuda/include -L /usr/local/cuda/lib64 \
    -lcudart -ldl -lpthread -lnvinfer -lnvinfer_plugin \
    -lnvonnxparser -lnvcaffe_parser -lopenblas
Step 7: Verify the correctness of hybrid inference [FP32]
Purpose: at FP32 precision, verify that our "TRT for the Backbone + Paddle for the post-processing" scheme matches the original pure-Paddle inference (judged by cosine similarity, error metrics, etc.).
# Delete any old TRT engine plan file
rm petrv1_backbone0.engine -f
# Run TRT Backbone inference; this builds the engine file and prints the result and timing
./3_run_backbone
# Set this environment variable to force true FP32 computation instead of TF32
export NVIDIA_TF32_OVERRIDE=0
# Run the Python script that loads the TRT output, runs the post-processing with Paddle, and compares the final result with the baseline
python3 4_run_postproc.py
Output:
====================Backbone======================
推理完成,耗时: 51.1274 ms
==================================================
张量: trt_backbone_out | 元素个数: 1536000
余弦相似度: 0.99999
最大绝对误差: 9.53674e-06
最大相对误差: 5.98412
均方误差(MSE): 2.44707e-13
====================后处理========================
iter:0 Host-latency(ms):4232.254
iter:1 Host-latency(ms):32.914
Output1 shape: (300, 9)
Output2 shape: (300,)
Output3 shape: (300,)
==================================================
形状: (300, 9)
余弦相似度: 1.000000
均方误差(MSE): 0.000000
==================================================
形状: (300,)
余弦相似度: 1.000000
均方误差(MSE): 0.000000
==================================================
形状: (300,)
余弦相似度: 1.000000
均方误差(MSE): 0.000000
==================================================
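For reference, the comparison metrics printed above (cosine similarity, maximum absolute/relative error, MSE) follow the same logic as the calculate_errors helpers in the C++ sources later in this document; a minimal numpy sketch, with the same 1e-8 guards:
import numpy as np

def compare(infer, ref, name):
    a = infer.ravel().astype(np.float64)
    b = ref.ravel().astype(np.float64)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    abs_err = np.abs(a - b)
    rel_err = abs_err / (np.abs(b) + 1e-8)
    print(f"Tensor: {name} | elements: {a.size}")
    print(f"cosine similarity: {cos:.6f}")
    print(f"max abs error: {abs_err.max():.6g}")
    print(f"max rel error: {rel_err.max():.6g}")
    print(f"MSE: {np.mean(abs_err ** 2):.6g}")

# e.g. compare(trt_output, paddle_reference, "trt_backbone_out")  # variable names are illustrative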
Step 8: C++ end-to-end (E2E) inference verification [FP32]
Purpose: combine the two separate stages from Step 7 into a single C++ program and verify correctness again. Test in two environments:
A. In the X86 development container:
g++ -std=c++17 -ggdb -Wno-deprecated-declarations -o 5_run_e2e 5_run_e2e.cpp -I /usr/local/cuda/include \
    -I /home/paddleinference-2.4.2/include \
    -L /usr/local/cuda/lib64 \
    -lcudart -ldl -lpthread -lnvinfer -lnvinfer_plugin \
    -lnvonnxparser -lnvcaffe_parser -lopenblas \
    /home/paddleinference-2.4.2/lib/libpaddle_inference.so -Wl,-rpath=/home/paddleinference-2.4.2/lib
rm petrv1_backbone.engine -f
./5_run_e2e
Output:
TrtInfer耗时: 48.9677 ms PaddleInfer耗时: 23.1573 ms 4 推理耗时: 77.2237 ms
==================================================
张量: boxes3d | 元素个数: 2700
余弦相似度: 1
最大绝对误差: 0.000136614
最大相对误差: 0.00298633
均方误差(MSE): 7.89871e-11
==================================================
张量: scores | 元素个数: 300
余弦相似度: 1
最大绝对误差: 6.73532e-06
最大相对误差: 4.03657e-05
均方误差(MSE): 9.89381e-13
==================================================
张量: labels | 元素个数: 300
余弦相似度: 1
最大绝对误差: 0
最大相对误差: 0
均方误差(MSE): 0
B. In the Orin-Apollo campus-edition target environment:
Purpose: test performance in the actual hardware and system environment used for deployment; the timing here is the final metric.
g++ -std=c++17 -ggdb -Wno-deprecated-declarations -o 5_run_e2e 5_run_e2e.cpp -I /usr/local/cuda/include \
    -I /apollo_workspace/.cache/bazel/679551712d2357b63e6e0ce858ebf90e/external/paddleinference-aarch64/paddle/include/ \
    -L /usr/local/cuda/lib64 \
    -lcudart -ldl -lpthread -lnvinfer -lnvinfer_plugin \
    -lnvonnxparser -lnvcaffe_parser -lopenblas \
    /opt/apollo/neo/packages/3rd-paddleinference/latest/lib/libpaddle_inference.so \
    -Wl,-rpath=/opt/apollo/neo/packages/3rd-paddleinference/latest/lib/
rm petrv1_backbone.engine -f
export NVIDIA_TF32_OVERRIDE=0
./5_run_e2e
Output:
TrtInfer耗时: 417.063 ms PaddleInfer耗时: 147.728 ms 推理耗时: 570.474 ms
==================================================
张量: boxes3d | 元素个数: 2700
余弦相似度: 1
最大绝对误差: 0.000250578
最大相对误差: 0.00637684
均方误差(MSE): 1.30645e-10
==================================================
张量: scores | 元素个数: 300
余弦相似度: 1
最大绝对误差: 7.03335e-06
最大相对误差: 7.25931e-05
均方误差(MSE): 1.03461e-12
==================================================
张量: labels | 元素个数: 300
余弦相似度: 0.999999
最大绝对误差: 0
最大相对误差: 0
均方误差(MSE): 0
Step 9: Final accuracy test (mAP metric)
Purpose: quantitatively evaluate the model's accuracy (mAP, mean Average Precision) on a standard dataset before and after optimization; this is the key measure of whether the optimization succeeded.
1. Generate baseline data
In the Paddle3D evaluation environment, save the preprocessed inputs of the test data and the baseline results of the original Paddle FP32 inference.
cd /home/Paddle3D
rm -rf model_iodata
mkdir -p model_iodata
python3 my_eval.py
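The reference tensors read back later by load_binary_data (in 4_run_postproc.py and the C++ programs) use a simple self-describing layout: an int32 rank, the shape as int32 values, then raw float32 data. Below is a minimal sketch of writing and reading that layout; whether every file under model_iodata carries the header is determined by the actual scripts (the raw image inputs, for example, are read without one):
import numpy as np

def save_tensor(path, arr):
    # Layout: [rank:int32][shape:int32 * rank][data:float32 * prod(shape)]
    with open(path, "wb") as f:
        shape = np.asarray(arr.shape, dtype=np.int32)
        np.array([shape.size], dtype=np.int32).tofile(f)
        shape.tofile(f)
        arr.astype(np.float32).tofile(f)

def load_tensor(path):
    with open(path, "rb") as f:
        rank = int(np.fromfile(f, dtype=np.int32, count=1)[0])
        shape = np.fromfile(f, dtype=np.int32, count=rank)
        return np.fromfile(f, dtype=np.float32).reshape(tuple(shape))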
2. Run FP32 inference with Paddle Inference to generate the inference outputs
cd /home/Paddle3D
python3 my_infer.py
3. Compute accuracy from those inference results to establish the accuracy baseline
cd /home/Paddle3D
python3 pred_eval.py
Output:
mAP: 0.3415
mATE: 0.7135
mASE: 0.4560
mAOE: 0.7077
mAVE: 0.8927
mAAE: 0.3029
NDS: 0.3635
Eval time: 5.8s
Per-class results:
Object Class AP ATE ASE AOE AVE AAE
car 0.608 0.511 0.161 0.142 0.234 0.056
truck 0.496 0.506 0.204 0.095 0.155 0.018
bus 0.553 0.532 0.075 0.565 2.377 0.151
trailer 0.000 1.000 1.000 1.000 1.000 1.000
construction_vehicle 0.000 1.000 1.000 1.000 1.000 1.000
pedestrian 0.507 0.683 0.248 0.693 0.501 0.197
motorcycle 0.487 0.660 0.313 1.230 0.070 0.000
bicycle 0.163 0.778 0.215 0.644 1.806 0.000
traffic_cone 0.602 0.465 0.344 nan nan nan
barrier 0.000 1.000 1.000 1.000 nan nan
4. In the Orin-Apollo environment, run mixed-precision inference and save the results
g++ -std=c++17 -O3 -Wno-deprecated-declarations -o 6_run_e2e_acc 6_run_e2e_acc.cpp \
    -I /usr/local/cuda/include \
    -I /apollo_workspace/.cache/bazel/679551712d2357b63e6e0ce858ebf90e/external/paddleinference-aarch64/paddle/include/ \
    -L /usr/local/cuda/lib64 \
    -lcudart -ldl -lpthread -lnvinfer -lnvinfer_plugin \
    -lnvonnxparser -lnvcaffe_parser -lopenblas \
    /opt/apollo/neo/packages/3rd-paddleinference/latest/lib/libpaddle_inference.so \
    -Wl,-rpath=/opt/apollo/neo/packages/3rd-paddleinference/latest/lib/
rm petrv1_backbone.engine -f
sudo rm ../Paddle3D/model_iodata/*labels.bin -f
sudo rm ../Paddle3D/model_iodata/*scores.bin -f
sudo rm ../Paddle3D/model_iodata/*bboxes.bin -f
export NVIDIA_TF32_OVERRIDE=0
./6_run_e2e_acc
Output:
TrtInfer耗时: 39.0737 ms; PaddleInfer耗时: 116.252 ms; E2E耗时: 159.358 ms;
5. Compute accuracy from the inference results above
cd /home/Paddle3D
python3 pred_eval.py
Output:
mAP: 0.3503
mATE: 0.7134
mASE: 0.4547
mAOE: 0.7051
mAVE: 0.8761
mAAE: 0.3107
NDS: 0.3691
Eval time: 6.3s
Per-class results:
Object Class AP ATE ASE AOE AVE AAE
car 0.605 0.519 0.161 0.139 0.242 0.059
truck 0.489 0.535 0.203 0.095 0.168 0.018
bus 0.589 0.555 0.071 0.481 2.207 0.202
trailer 0.000 1.000 1.000 1.000 1.000 1.000
construction_vehicle 0.000 1.000 1.000 1.000 1.000 1.000
pedestrian 0.505 0.683 0.251 0.690 0.493 0.207
motorcycle 0.490 0.667 0.312 1.315 0.068 0.000
bicycle 0.201 0.703 0.202 0.627 1.831 0.000
traffic_cone 0.624 0.472 0.347 nan nan nan
barrier 0.000 1.000 1.000 1.000 nan nan
Step 10: Cleanup
rm *.engine *.onnx *.cache *.bin -f
rm 3_run_backbone 5_run_e2e -f
rm petrv1_* -rf
IV. Related Code
The code below reflects the attempts made at each step, so there is some redundancy.
1_gen_backbone.py
import argparse
import sys
import numpy as np
def new_prepend_feed_ops(inference_program,
feed_target_names,
feed_holder_name='feed'):
import paddle.fluid.core as core
if len(feed_target_names) == 0:
return
global_block = inference_program.global_block()
feed_var = global_block.create_var(
name=feed_holder_name,
type=core.VarDesc.VarType.FEED_MINIBATCH,
persistable=True)
for i, name in enumerate(feed_target_names):
if not global_block.has_var(name):
print("The input[{i}]: '{name}' doesn't exist in pruned inference program.".format(i=i, name=name))
continue
out = global_block.var(name)
global_block._prepend_op(
type='feed',
inputs={'X': [feed_var]},
outputs={'Out': [out]},
attrs={'col': i})
def append_fetch_ops(program, fetch_target_names, fetch_holder_name='fetch'):
"""
In this place, we will add the fetch op
"""
import paddle.fluid.core as core
global_block = program.global_block()
fetch_var = global_block.create_var(
name=fetch_holder_name,
type=core.VarDesc.VarType.FETCH_LIST,
persistable=True)
print("the len of fetch_target_names:%d" % (len(fetch_target_names)))
for i, name in enumerate(fetch_target_names):
global_block.append_op(
type='fetch',
inputs={'X': [name]},
outputs={'Out': [fetch_var]},
attrs={'col': i})
def insert_fetch(program, fetchs, fetch_holder_name="fetch"):
global_block = program.global_block()
need_to_remove_op_index = list()
for i, op in enumerate(global_block.ops):
if op.type == 'fetch':
need_to_remove_op_index.append(i)
for index in need_to_remove_op_index[::-1]:
global_block._remove_op(index)
program.desc.flush()
append_fetch_ops(program, fetchs, fetch_holder_name)
def process_old_ops_desc(program):
for i in range(len(program.blocks[0].ops)):
if program.blocks[0].ops[i].type == "matmul":
if not program.blocks[0].ops[i].has_attr("head_number"):
program.blocks[0].ops[i]._set_attr("head_number", 1)
def infer_shape(program, input_shape_dict):
paddle.enable_static()
OP_WITHOUT_KERNEL_SET = {
"feed",
"fetch",
"recurrent",
"go",
"rnn_memory_helper_grad",
"conditional_block",
"while",
"send",
"recv",
"listen_and_serv",
"fl_listen_and_serv",
"ncclInit",
"select",
"checkpoint_notify",
"gen_bkcl_id",
"c_gen_bkcl_id",
"gen_nccl_id",
"c_gen_nccl_id",
"c_comm_init",
"c_sync_calc_stream",
"c_sync_comm_stream",
"queue_generator",
"dequeue",
"enqueue",
"heter_listen_and_serv",
"c_wait_comm",
"c_wait_compute",
"c_gen_hccl_id",
"c_comm_init_hccl",
"copy_cross_scope",
}
model_version = program.desc._version()
paddle_version = paddle.__version__
major_ver = model_version // 1000000
minor_ver = (model_version - major_ver * 1000000) // 1000
patch_ver = model_version - major_ver * 1000000 - minor_ver * 1000
model_version = "{}.{}.{}".format(major_ver, minor_ver, patch_ver)
if model_version != paddle_version:
print(
"[WARNING] The model is saved by paddlepaddle v{}, but now your paddlepaddle is version of {}, this difference may cause error, it is recommend you reinstall a same version of paddlepaddle for this model".format(
model_version, paddle_version
)
)
for k, v in input_shape_dict.items():
program.blocks[0].var(k).desc.set_shape(v)
for i in range(len(program.blocks)):
for j in range(len(program.blocks[0].ops)):
try:
if program.blocks[i].ops[j].type in OP_WITHOUT_KERNEL_SET:
continue
program.blocks[i].ops[j].desc.infer_shape(program.blocks[i].desc)
except:
pass
if __name__ == '__main__':
import paddle
paddle.enable_static()
paddle.fluid.io.prepend_feed_ops = new_prepend_feed_ops
import paddle.fluid as fluid
print("Start to load paddle model...")
exe = fluid.Executor(fluid.CPUPlace())
[prog, ipts, outs] = fluid.io.load_inference_model("../petrv1", exe,
model_filename="petr_inference.pdmodel",
params_filename="petr_inference.pdiparams")
# 形状推导
process_old_ops_desc(prog)
infer_shape(prog, {"images":[1, 6, 3, 320, 800],"img2lidars":[1, 6, 4, 4]})
feed_vars = [prog.global_block().var(name) for name in ipts]
# 设置额外的形状
node_shape_ext={"nearest_interp_v2_0.tmp_0":[6,256,20,50]}
mod_vars = [[name,prog.global_block().var(name)] for name in node_shape_ext.keys()]
for name,var in mod_vars:
var.desc.set_shape(node_shape_ext[name])
output_names=["conv2d_240.tmp_0"]
insert_fetch(prog, output_names)
# 保存模型
new_outputs = [prog.global_block().var(name) for name in output_names]
fluid.io.save_inference_model("petrv1_backbone", ipts, new_outputs, exe, prog,
model_filename="petr_inference.pdmodel",
params_filename="petr_inference.pdiparams")
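To double-check what the pruned backbone actually exposes, it can be reloaded and its feed/fetch names printed; a small sketch using the same fluid APIs as the script above (illustrative, not part of the original flow):
import paddle
import paddle.fluid as fluid

paddle.enable_static()
exe = fluid.Executor(fluid.CPUPlace())
# Reload the pruned backbone and list its inputs/outputs.
prog, feed_names, fetch_vars = fluid.io.load_inference_model(
    "petrv1_backbone", exe,
    model_filename="petr_inference.pdmodel",
    params_filename="petr_inference.pdiparams")
print("inputs :", feed_names)
print("outputs:", [v.name for v in fetch_vars])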
2_gen_postproc.py
import argparse
import sys
import paddle
import paddle.static as static
import paddle.fluid.core as core
def prepend_feed_ops(program, feed_target_names):
if len(feed_target_names) == 0:
return
global_block = program.global_block()
feed_var = global_block.create_var(
name="feed", type=core.VarDesc.VarType.FEED_MINIBATCH, persistable=True
)
for i, name in enumerate(feed_target_names):
if not global_block.has_var(name):
print(
"The input[{i}]: '{name}' doesn't exist in pruned inference program.".format(
i=i, name=name
)
)
continue
out = global_block.var(name)
global_block._prepend_op(
type="feed",
inputs={"X": [feed_var]},
outputs={"Out": [out]},
attrs={"col": i},
)
def append_fetch_ops(program, fetch_target_names):
"""
In this place, we will add the fetch op
"""
global_block = program.global_block()
fetch_var = global_block.create_var(
name="fetch", type=core.VarDesc.VarType.FETCH_LIST, persistable=True
)
print("the len of fetch_target_names:%d" % (len(fetch_target_names)))
for i, name in enumerate(fetch_target_names):
global_block.append_op(
type="fetch",
inputs={"X": [name]},
outputs={"Out": [fetch_var]},
attrs={"col": i},
)
def insert_by_op_type(program, op_names, op_type):
global_block = program.global_block()
need_to_remove_op_index = list()
for i, op in enumerate(global_block.ops):
if op.type == op_type:
need_to_remove_op_index.append(i)
for index in need_to_remove_op_index[::-1]:
global_block._remove_op(index)
program.desc.flush()
if op_type == "feed":
prepend_feed_ops(program, op_names)
else:
append_fetch_ops(program, op_names)
if __name__ == '__main__':
import paddle
paddle.enable_static()
print("Start to load paddle model...")
exe = static.Executor(paddle.CPUPlace())
[program, feed_target_names, fetch_targets] = static.io.load_inference_model(
'../petrv1',
exe,
model_filename='petr_inference.pdmodel',
params_filename='petr_inference.pdiparams',
)
input_names=['shape_0.tmp_0','img2lidars','conv2d_240.tmp_0']
insert_by_op_type(program, input_names, "feed")
feed_vars = [program.global_block().var(name) for name in input_names]
fetch_vars = [out_var for out_var in fetch_targets]
static.io.save_inference_model(
path_prefix='petrv1_postproc/inference',
feed_vars=feed_vars,
fetch_vars=fetch_vars,
executor=exe,
program=program,
)
3_run_backbone.cpp
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include <string.h>
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cuda_runtime_api.h>
#include <cmath>
#include <algorithm>
#include <assert.h>
#include <map>
#include <memory>
#include <numeric>
#include <iterator>
#include <chrono> // 添加chrono库用于计时
#include <cblas.h>
using namespace std;
// TensorRT日志记录器
class Logger : public nvinfer1::ILogger
{
public:
void log(Severity severity, const char* msg) noexcept override
{
if (severity >= Severity::kERROR)
{
// std::cout << msg << std::endl;
}
}
} gLogger;
// IInt8MinMaxCalibrator实现类
class MinMaxCalibrator : public nvinfer1::IInt8MinMaxCalibrator
{
public:
MinMaxCalibrator(const std::string& inputFile, const std::string& inputBlobName, int batchSize)
: mInputFile(inputFile), mInputBlobName(inputBlobName), mBatchSize(batchSize), mCurrentBatch(0)
{
// 读取输入二进制文件
std::ifstream file(inputFile, std::ios::binary);
if (!file)
{
std::cerr << "无法打开输入文件: " << inputFile << std::endl;
exit(EXIT_FAILURE);
}
// 获取文件大小
file.seekg(0, std::ios::end);
mInputSize = file.tellg();
file.seekg(0, std::ios::beg);
// 分配内存并读取数据
mHostData.resize(mInputSize / sizeof(float));
file.read(reinterpret_cast<char*>(mHostData.data()), mInputSize);
// 计算总批次数
const int inputDimsProduct = 6 * 3 * 320 * 800; // 1, 6, 3, 320, 800
mTotalBatches = mHostData.size() / (mBatchSize * inputDimsProduct);
std::cout << "校准器初始化: 共 " << mTotalBatches << " 批, 每批 " << mBatchSize << " 样本" << std::endl;
// 分配设备内存
cudaMalloc(&mDeviceData, mBatchSize * inputDimsProduct * sizeof(float));
}
~MinMaxCalibrator() override
{
cudaFree(mDeviceData);
}
int getBatchSize() const noexcept override
{
return mBatchSize;
}
bool getBatch(void* bindings[], const char* names[], int nbBindings) noexcept override
{
if (mCurrentBatch >= mTotalBatches)
{
return false; // 没有更多批次
}
const int inputDimsProduct = 6 * 3 * 320 * 800; // 1, 6, 3, 320, 800
const size_t batchByteSize = mBatchSize * inputDimsProduct * sizeof(float);
// 复制当前批次数据到设备
cudaMemcpy(mDeviceData,
mHostData.data() + mCurrentBatch * inputDimsProduct,
batchByteSize,
cudaMemcpyHostToDevice);
// 设置绑定
for (int i = 0; i < nbBindings; ++i)
{
if (strcmp(names[i], mInputBlobName.c_str()) == 0)
{
bindings[i] = mDeviceData;
break;
}
}
mCurrentBatch++;
return true;
}
const void* readCalibrationCache(size_t& length) noexcept override
{
mCache.clear();
std::ifstream input("petrv1_calibration.cache", std::ios::binary);
if (input.good())
{
input >> std::noskipws;
std::copy(std::istream_iterator<char>(input), std::istream_iterator<char>(),
std::back_inserter(mCache));
}
length = mCache.size();
return length ? mCache.data() : nullptr;
}
void writeCalibrationCache(const void* cache, size_t length) noexcept override
{
std::ofstream output("petrv1_calibration.cache", std::ios::binary);
output.write(reinterpret_cast<const char*>(cache), length);
}
private:
std::string mInputFile;
std::string mInputBlobName;
int mBatchSize;
int mTotalBatches;
int mCurrentBatch;
size_t mInputSize;
std::vector<float> mHostData;
void* mDeviceData{nullptr};
std::vector<char> mCache;
};
// 计算误差指标
void calculate_errors(const float* infer, const float* ref, size_t n, const char* name) {
// 余弦相似度
float dot = cblas_sdot(n, infer, 1, ref, 1);
float norm_infer = cblas_snrm2(n, infer, 1);
float norm_ref = cblas_snrm2(n, ref, 1);
float cosine_similarity = dot / (norm_infer * norm_ref + 1e-8f);
// 计算绝对误差和相对误差
float max_abs_error = 0.0f;
float max_rel_error = 0.0f;
double mse_sum = 0.0;
for (size_t i = 0; i < n; i++) {
float abs_error = std::abs(infer[i] - ref[i]);
if (abs_error > max_abs_error) {
max_abs_error = abs_error;
}
float rel_error = abs_error / (std::abs(ref[i]) + 1e-8f);
if (rel_error > max_rel_error) {
max_rel_error = rel_error;
}
mse_sum += static_cast<double>(abs_error) * abs_error;
}
float mse = static_cast<float>(mse_sum / n);
std::cout << std::string(50, '=') << std::endl;
std::cout << "张量: " << name << " | 元素个数: " << n << std::endl;
std::cout << "余弦相似度: "<< cosine_similarity << std::endl;
std::cout << "最大绝对误差: " << max_abs_error << std::endl;
std::cout << "最大相对误差: " << max_rel_error << std::endl;
std::cout << "均方误差(MSE): " << mse << std::endl;
}
int main(int argc,char *argv[])
{
// 配置参数
const std::string onnxModelPath = "petrv1_backbone.onnx";
const std::string inputBinPath = "../data/backbone_input.bin";
const std::string outputBinPath = "../data/backbone_output.bin";
const std::string inputBlobName = "images";
const int batchSize = 1;
const std::string enginePath = "petrv1_backbone0.engine"; // 引擎文件路径
nvinfer1::ICudaEngine* engine = nullptr;
nvinfer1::IRuntime* runtime = nullptr;
// 检查引擎文件是否存在,如果存在则加载
std::ifstream engineFile(enginePath, std::ios::binary);
if (engineFile.good()) {
std::cout << "发现已存在的引擎文件,正在加载..." << std::endl;
// 获取文件大小
engineFile.seekg(0, std::ios::end);
size_t engineSize = engineFile.tellg();
engineFile.seekg(0, std::ios::beg);
// 分配内存并读取引擎数据
std::vector<char> engineData(engineSize);
engineFile.read(engineData.data(), engineSize);
// 创建运行时并反序列化引擎
runtime = nvinfer1::createInferRuntime(gLogger);
engine = runtime->deserializeCudaEngine(engineData.data(), engineSize, nullptr);
std::cout << "引擎加载完成" << std::endl;
} else {
// 引擎文件不存在,从ONNX模型构建
std::cout << "未发现引擎文件,开始从ONNX模型构建..." << std::endl;
// 创建构建器和网络
auto builder = nvinfer1::createInferBuilder(gLogger);
const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
auto network = builder->createNetworkV2(explicitBatch);
// 创建ONNX解析器
auto parser = nvonnxparser::createParser(*network, gLogger);
// 解析ONNX模型
if (!parser->parseFromFile(onnxModelPath.c_str(), static_cast<int>(nvinfer1::ILogger::Severity::kINFO))) {
std::cerr << "解析ONNX模型失败" << std::endl;
return EXIT_FAILURE;
}
// 创建构建器配置
auto config = builder->createBuilderConfig();
config->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kWORKSPACE, 1ULL << 30); // 1GB
config->clearFlag(nvinfer1::BuilderFlag::kTF32);
// 配置INT8量化
if(argc>1)
{
printf("Enable INT8 Quantization
");
config->setFlag(nvinfer1::BuilderFlag::kINT8);
config->setFlag(nvinfer1::BuilderFlag::kFP16);
}
auto calibrator = new MinMaxCalibrator(inputBinPath, inputBlobName, batchSize);
config->setInt8Calibrator(calibrator);
// 添加DLA优化配置
if (builder->getNbDLACores() > 0) {
std::cout << "发现可用的DLA核心,将使用DLA进行加速" << std::endl;
config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK); // 无法在DLA上运行的层回退到GPU
} else {
std::cout << "未发现可用的DLA核心,将使用GPU进行推理" << std::endl;
}
// 构建引擎
std::cout << "开始构建..." << std::endl;
engine = builder->buildEngineWithConfig(*network, *config);
std::cout << "引擎构建完成" << std::endl;
// 保存引擎到文件
std::cout << "开始保存引擎到文件..." << std::endl;
auto serializedEngine = engine->serialize();
std::ofstream outputFile(enginePath, std::ios::binary);
outputFile.write((const char*)serializedEngine->data(), serializedEngine->size());
std::cout << "引擎保存完成: " << enginePath << std::endl;
// 释放序列化引擎资源
serializedEngine->destroy();
// 释放资源
delete calibrator;
delete parser;
delete network;
delete config;
delete builder;
if (!engine) {
std::cerr << "构建引擎失败" << std::endl;
return EXIT_FAILURE;
}
}
// 创建执行上下文
auto context = engine->createExecutionContext();
// 读取输入数据
std::ifstream inputFile(inputBinPath, std::ios::binary);
if (!inputFile) {
std::cerr << "无法打开输入文件: " << inputBinPath << std::endl;
return EXIT_FAILURE;
}
// 获取输入大小
inputFile.seekg(0, std::ios::end);
const size_t inputSize = inputFile.tellg();
inputFile.seekg(0, std::ios::beg);
// 分配输入内存
std::vector<float> hostInput(inputSize / sizeof(float));
inputFile.read(reinterpret_cast<char*>(hostInput.data()), inputSize);
// 分配输出内存
const int outputSize = 6 * 256 * 20 * 50; // 6,256,20,50
std::vector<float> hostOutput(outputSize);
// 分配设备内存
void* deviceInput = nullptr;
void* deviceOutput = nullptr;
cudaMalloc(&deviceInput, inputSize);
cudaMalloc(&deviceOutput, outputSize * sizeof(float));
// 复制输入数据到设备
cudaMemcpy(deviceInput, hostInput.data(), inputSize, cudaMemcpyHostToDevice);
// 绑定缓冲区
void* bindings[] = {deviceInput, deviceOutput};
// 执行推理并计时
std::cout << "开始推理..." << std::endl;
for(int iter=0;iter<5;iter++)
{
auto start = std::chrono::high_resolution_clock::now(); // 开始计时
context->executeV2(bindings);
auto end = std::chrono::high_resolution_clock::now(); // 结束计时
std::chrono::duration<double, std::milli> inferenceTime = end - start;
std::cout << "推理完成,耗时: " << inferenceTime.count() << " ms" << std::endl;
}
// 复制输出数据到主机
cudaMemcpy(hostOutput.data(), deviceOutput, outputSize * sizeof(float), cudaMemcpyDeviceToHost);
// 读取参考输出
std::ifstream refFile(outputBinPath, std::ios::binary);
if (!refFile) {
std::cerr << "无法打开参考输出文件: " << outputBinPath << std::endl;
return EXIT_FAILURE;
}
std::vector<float> refOutput(outputSize);
refFile.read(reinterpret_cast<char*>(refOutput.data()), outputSize * sizeof(float));
// 计算MSE
calculate_errors(hostOutput.data(), refOutput.data(), outputSize,"trt_backbone_out");
std::ofstream outputFile("trt_backbone_out.bin", std::ios::binary);
outputFile.write((const char*)hostOutput.data(), outputSize * sizeof(float));
// 释放资源
cudaFree(deviceInput);
cudaFree(deviceOutput);
delete context;
delete engine;
if (runtime) delete runtime; // 如果是加载的引擎,需要释放runtime
return EXIT_SUCCESS;
}
4_run_postproc.py
import numpy as np
import paddle
import os
import time
def load_binary_data(filename):
with open(filename, 'rb') as f:
# 读取形状信息
shape_size = np.fromfile(f, dtype=np.int32, count=1)[0]
shape = np.fromfile(f, dtype=np.int32, count=shape_size)
# 读取数据
data = np.fromfile(f, dtype=np.float32)
# 重塑数据
data = data.reshape(tuple(shape))
return data
# 计算误差指标
def calculate_errors(infer, ref):
cosine_similarity= np.dot(infer.flatten(),
ref.flatten()) / (np.linalg.norm(infer) * np.linalg.norm(ref) + 1e-8)
mse = np.mean((infer - ref) ** 2)
print("="*50)
print("推理结果验证报告")
print("="*50)
print(f"形状: {infer.shape}")
print(f"余弦相似度: {cosine_similarity:.6f}")
print(f"均方误差(MSE): {mse:.6f}")
print("="*50)
if __name__ == "__main__":
output__generated_var_14=load_binary_data("../data/output__generated_var_14.bin")
print(output__generated_var_14.shape)
output__generated_var_4=load_binary_data("../data/output__generated_var_4.bin")
print(output__generated_var_4.shape)
output__generated_var_9=load_binary_data("../data/output__generated_var_9.bin")
print(output__generated_var_9.shape)
# 1. 启用静态图模式
paddle.enable_static()
with open("../data/input_img2lidars.bin", 'rb') as f:
input_img2lidars = np.fromfile(f, dtype=np.float32).reshape((1,6, 4, 4))
with open('trt_backbone_out.bin', 'rb') as f:
backbone_output = np.fromfile(f, dtype=np.float32).reshape((6, 256, 20, 50))
# 3. 运行postproc模型
print("Start to load paddle model...")
exe = paddle.fluid.Executor(paddle.fluid.CUDAPlace(0))
[prog, ipts, outs] = paddle.fluid.io.load_inference_model(
"petrv1_postproc",
exe,
model_filename="inference.pdmodel",
params_filename="inference.pdiparams"
)
print("输入变量:", [x for x in ipts])
print("输出变量:", [x for x in outs])
for i in range(5):
t0=time.time()
results = exe.run(
program=prog,
feed={
'img2lidars': input_img2lidars,
'conv2d_240.tmp_0': backbone_output,
'shape_0.tmp_0': np.array([1, 6, 3, 320, 800])
},
fetch_list=outs
)
t1=time.time()
print(f"iter:{i} Host-latency(ms):{(t1-t0)*1000:.3f}")
output1 = results[0]
output2 = results[1]
output3 = results[2]
print("Output1 shape:", output1.shape)
print("Output2 shape:", output2.shape)
print("Output3 shape:", output3.shape)
# 4. 计算误差
calculate_errors(output1, output__generated_var_4)
calculate_errors(output2, output__generated_var_9)
calculate_errors(output3, output__generated_var_14)
5_run_e2e.cpp
#include <iterator>
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include <string.h>
#include <map>
#include <memory>
#include <algorithm>
#include <cuda_runtime_api.h>
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <paddle_inference_api.h>
#include <numeric>
#include <iterator>
#include <chrono> // 添加chrono库用于计时
#include <cblas.h>
using namespace std;
// 定义Tensor结构
struct Tensor {
std::vector<int> shape;
std::vector<float> data;
};
// 加载有形状信息的二进制文件
Tensor load_binary_data(const std::string& filename) {
std::ifstream file(filename, std::ios::binary);
if (!file) {
throw std::system_error(errno, std::system_category(), "无法打开文件: " + filename);
}
// 读取形状的维度
int shape_size;
if (!file.read(reinterpret_cast<char*>(&shape_size), sizeof(int))) {
throw std::runtime_error("读取形状大小失败: " + filename);
}
// 读取形状
std::vector<int> shape(shape_size);
if (!file.read(reinterpret_cast<char*>(shape.data()), shape_size * sizeof(int))) {
throw std::runtime_error("读取形状数据失败: " + filename);
}
// 计算数据大小
size_t num_elements = 1;
for (int dim : shape) {
if (dim <= 0) {
throw std::runtime_error("无效的形状维度: " + filename);
}
num_elements *= dim;
}
// 读取数据
std::vector<float> data(num_elements);
if (!file.read(reinterpret_cast<char*>(data.data()), num_elements * sizeof(float))) {
throw std::runtime_error("读取张量数据失败: " + filename);
}
return {shape, data};
}
// 加载原始float数据
std::vector<float> load_raw_float(const std::string& filename, size_t num_elements) {
std::ifstream file(filename, std::ios::binary);
if (!file) {
throw std::system_error(errno, std::system_category(), "无法打开文件: " + filename);
}
std::vector<float> data(num_elements);
if (!file.read(reinterpret_cast<char*>(data.data()), num_elements * sizeof(float))) {
throw std::runtime_error("读取浮点数据失败: " + filename);
}
return data;
}
// 计算误差指标
void calculate_errors(const float* infer, const float* ref, size_t n, const char* name) {
// 余弦相似度
float dot = cblas_sdot(n, infer, 1, ref, 1);
float norm_infer = cblas_snrm2(n, infer, 1);
float norm_ref = cblas_snrm2(n, ref, 1);
float cosine_similarity = dot / (norm_infer * norm_ref + 1e-8f);
// 计算绝对误差和相对误差
float max_abs_error = 0.0f;
float max_rel_error = 0.0f;
double mse_sum = 0.0;
for (size_t i = 0; i < n; i++) {
float abs_error = std::abs(infer[i] - ref[i]);
if (abs_error > max_abs_error) {
max_abs_error = abs_error;
}
float rel_error = abs_error / (std::abs(ref[i]) + 1e-8f);
if (rel_error > max_rel_error) {
max_rel_error = rel_error;
}
mse_sum += static_cast<double>(abs_error) * abs_error;
}
float mse = static_cast<float>(mse_sum / n);
std::cout << std::string(50, '=') << std::endl;
std::cout << "张量: " << name << " | 元素个数: " << n << std::endl;
std::cout << "余弦相似度: "<< cosine_similarity << std::endl;
std::cout << "最大绝对误差: " << max_abs_error << std::endl;
std::cout << "最大相对误差: " << max_rel_error << std::endl;
std::cout << "均方误差(MSE): " << mse << std::endl;
}
// TensorRT日志记录器
class Logger : public nvinfer1::ILogger {
public:
void log(Severity severity, const char* msg) noexcept override {
if (severity >= Severity::kERROR)
{
// std::cout << msg << std::endl;
}
}
} gLogger;
// IInt8MinMaxCalibrator实现类
class MinMaxCalibrator : public nvinfer1::IInt8MinMaxCalibrator {
public:
MinMaxCalibrator(const std::string& inputFile, const std::string& inputBlobName, int batchSize)
: mInputFile(inputFile), mInputBlobName(inputBlobName), mBatchSize(batchSize), mCurrentBatch(0) {
// 读取输入二进制文件
std::ifstream file(inputFile, std::ios::binary);
if (!file) {
std::cerr << "无法打开输入文件: " << inputFile << std::endl;
exit(EXIT_FAILURE);
}
// 获取文件大小
file.seekg(0, std::ios::end);
mInputSize = file.tellg();
file.seekg(0, std::ios::beg);
// 分配内存并读取数据
mHostData.resize(mInputSize / sizeof(float));
file.read(reinterpret_cast<char*>(mHostData.data()), mInputSize);
// 计算总批次数
const int inputDimsProduct = 6 * 3 * 320 * 800; // 6, 3, 320, 800
mTotalBatches = mHostData.size() / (mBatchSize * inputDimsProduct);
std::cout << "校准器初始化: 共 " << mTotalBatches << " 批, 每批 " << mBatchSize << " 样本" << std::endl;
// 分配设备内存
cudaMalloc(&mDeviceData, mBatchSize * inputDimsProduct * sizeof(float));
}
~MinMaxCalibrator() override {
cudaFree(mDeviceData);
}
int getBatchSize() const noexcept override {
return mBatchSize;
}
bool getBatch(void* bindings[], const char* names[], int nbBindings) noexcept override {
if (mCurrentBatch >= mTotalBatches) {
return false; // 没有更多批次
}
const int inputDimsProduct = 6 * 3 * 320 * 800; // 6, 3, 320, 800
const size_t batchByteSize = mBatchSize * inputDimsProduct * sizeof(float);
// 复制当前批次数据到设备
cudaMemcpy(mDeviceData,
mHostData.data() + mCurrentBatch * inputDimsProduct,
batchByteSize,
cudaMemcpyHostToDevice);
// 设置绑定
for (int i = 0; i < nbBindings; ++i) {
if (strcmp(names[i], mInputBlobName.c_str()) == 0) {
bindings[i] = mDeviceData;
break;
}
}
mCurrentBatch++;
return true;
}
const void* readCalibrationCache(size_t& length) noexcept override {
mCache.clear();
std::ifstream input("petrv1_calibration.cache", std::ios::binary);
if (input.good()) {
input >> std::noskipws;
std::copy(std::istream_iterator<char>(input), std::istream_iterator<char>(),
std::back_inserter(mCache));
}
length = mCache.size();
return length ? mCache.data() : nullptr;
}
void writeCalibrationCache(const void* cache, size_t length) noexcept override {
std::ofstream output("petrv1_calibration.cache", std::ios::binary);
output.write(reinterpret_cast<const char*>(cache), length);
}
private:
std::string mInputFile;
std::string mInputBlobName;
int mBatchSize;
int mTotalBatches;
int mCurrentBatch;
size_t mInputSize;
std::vector<float> mHostData;
void* mDeviceData{nullptr};
std::vector<char> mCache;
};
// Paddle Inference的Blob类
template <typename Dtype> class Blob {
public:
Blob(const std::vector<int> &shape) : shape_(shape) {
count_ = 1;
for (int dim : shape_) {
count_ *= dim;
}
data_.reset(new Dtype[count_]());
}
const std::vector<int> &shape() const { return shape_; }
void Reshape(const std::vector<int> &shape) {
shape_ = shape;
count_ = 1;
for (int dim : shape_) {
count_ *= dim;
}
data_.reset(new Dtype[count_]());
}
Dtype *mutable_cpu_data() { return data_.get(); }
const Dtype *cpu_data() const { return data_.get(); }
const Dtype *gpu_data() const { return data_.get(); }
int count() const { return count_; }
private:
std::vector<int> shape_;
int count_;
std::unique_ptr<Dtype[]> data_;
};
template <typename Dtype> using BlobPtr = std::shared_ptr<Blob<Dtype>>;
typedef std::map<std::string, BlobPtr<float>> BlobMap;
// PETRv1推理类
class PETRv1Inference {
public:
PETRv1Inference(const std::string& backbone_onnx_path,
const std::string& backbone_engine_path,
const std::string& postproc_model_dir,
int gpu_id = 0)
: backbone_onnx_path_(backbone_onnx_path),
backbone_engine_path_(backbone_engine_path),
postproc_model_dir_(postproc_model_dir),
gpu_id_(gpu_id) {
// 初始化Backbone
if (!initializeBackbone()) {
throw std::runtime_error("Failed to initialize backbone");
}
// 初始化后处理
if (!initializePostProcess()) {
throw std::runtime_error("Failed to initialize post-process");
}
}
~PETRv1Inference() {
// 释放资源
if (backbone_context_) {
backbone_context_->destroy();
}
if (backbone_engine_) {
backbone_engine_->destroy();
}
if (backbone_runtime_) {
backbone_runtime_->destroy();
}
if (device_input_) {
cudaFree(device_input_);
}
if (device_output_) {
cudaFree(device_output_);
}
}
// 执行完整推理流程
bool runInference(const float* images,
const float* img2lidars,
int warmup_iters = 5,
int perf_iters = 10) {
// 执行Backbone推理
if (!runBackbone(images)) {
return false;
}
// 执行后处理推理
if (!runPostProcess(img2lidars)) {
return false;
}
return true;
}
// 获取后处理输出
const std::map<std::string, std::vector<float>>& getOutputs() const {
return outputs_;
}
private:
// 分配设备内存
const int input_size = 6 * 3 * 320 * 800 * sizeof(float);
const int output_size = 6 * 256 * 20 * 50 * sizeof(float);
std::vector<float> host_output;
std::vector<int> img2lidars_shape = {1, 6, 4, 4};
std::vector<int> backbone_output_shape = {6, 256, 20, 50};
std::vector<int> shape_0_shape = {5};
// 初始化Backbone
bool initializeBackbone() {
host_output.resize(6 * 256 * 20 * 50);
// 检查引擎文件是否存在
std::ifstream engine_file(backbone_engine_path_, std::ios::binary);
if (engine_file.good()) {
std::cout << "发现已存在的引擎文件,正在加载..." << std::endl;
engine_file.seekg(0, std::ios::end);
size_t engine_size = engine_file.tellg();
engine_file.seekg(0, std::ios::beg);
std::vector<char> engine_data(engine_size);
engine_file.read(engine_data.data(), engine_size);
backbone_runtime_ = nvinfer1::createInferRuntime(gLogger);
backbone_engine_ = backbone_runtime_->deserializeCudaEngine(engine_data.data(), engine_size, nullptr);
std::cout << "引擎加载完成" << std::endl;
} else {
// 构建新引擎
std::cout << "未发现引擎文件,开始从ONNX模型构建..." << std::endl;
auto builder = nvinfer1::createInferBuilder(gLogger);
const auto explicit_batch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
auto network = builder->createNetworkV2(explicit_batch);
auto parser = nvonnxparser::createParser(*network, gLogger);
if (!parser->parseFromFile(backbone_onnx_path_.c_str(), static_cast<int>(nvinfer1::ILogger::Severity::kINFO))) {
std::cerr << "解析ONNX模型失败" << std::endl;
return false;
}
auto config = builder->createBuilderConfig();
config->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kWORKSPACE, 1ULL << 30);
// config->setFlag(nvinfer1::BuilderFlag::kFP16);
// config->setFlag(nvinfer1::BuilderFlag::kINT8);
config->setFlag(nvinfer1::BuilderFlag::kDIRECT_IO);
config->clearFlag(nvinfer1::BuilderFlag::kTF32);
auto calibrator = new MinMaxCalibrator("../data/backbone_input.bin", "images", 1);
config->setInt8Calibrator(calibrator);
if (builder->getNbDLACores() > 0) {
std::cout << "发现可用的DLA核心,将使用DLA进行加速" << std::endl;
config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);
} else {
std::cout << "未发现可用的DLA核心,将使用GPU进行推理" << std::endl;
}
backbone_engine_ = builder->buildEngineWithConfig(*network, *config);
if (!backbone_engine_) {
std::cerr << "构建引擎失败" << std::endl;
return false;
}
// 保存引擎
auto serialized_engine = backbone_engine_->serialize();
std::ofstream output_file(backbone_engine_path_, std::ios::binary);
output_file.write((const char*)serialized_engine->data(), serialized_engine->size());
serialized_engine->destroy();
// 清理资源
delete calibrator;
delete parser;
delete network;
delete config;
delete builder;
}
// 创建执行上下文
backbone_context_ = backbone_engine_->createExecutionContext();
cudaMalloc(&device_input_, input_size);
cudaMalloc(&device_output_, output_size);
return true;
}
// 初始化后处理
bool initializePostProcess() {
paddle_infer::Config config;
std::string model_file = postproc_model_dir_ + "/inference.pdmodel";
std::string params_file = postproc_model_dir_ + "/inference.pdiparams";
config.SetModel(model_file, params_file);
if (gpu_id_ >= 0) {
config.EnableUseGpu(1000, gpu_id_);
//config.EnableUseGpu(512, 0,paddle::AnalysisConfig::Precision::kHalf);
}
config.EnableMemoryOptim();
config.SwitchIrOptim(true);
config.EnableCUDNN();
config.Exp_DisableMixedPrecisionOps({"fill_any_like","matmul_v2","softmax","layer_norm","sigmoid",
"log","clip","elementwise_div","conv2d","elementwise_pow","elementwise_floordiv","nearest_interp_v2",
"elementwise_add","bitwise_not","elementwise_sub","bitwise_and","exp","top_k_v2","isnan_v2","fill_constant"});
config.DisableGlogInfo();
postproc_predictor_ = paddle_infer::CreatePredictor(config);
if (!postproc_predictor_) {
std::cerr << "创建Paddle Predictor失败" << std::endl;
return false;
}
auto input_handle3 = postproc_predictor_->GetInputHandle("shape_0.tmp_0");
input_handle3->Reshape(shape_0_shape);
int shape_data[5] = {1, 6, 3, 320, 800};
input_handle3->CopyFromCpu(shape_data);
return true;
}
// 执行Backbone推理
bool runBackbone(const float *images) {
cudaMemcpy(device_input_, images, input_size, cudaMemcpyHostToDevice);
void* bindings[] = {device_input_, device_output_};
auto start = std::chrono::high_resolution_clock::now();
backbone_context_->executeV2(bindings);
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> duration = end - start;
std::cout << " TrtInfer耗时: " << duration.count() << " ms" << std::endl;
cudaMemcpy(host_output.data(), device_output_, host_output.size() * sizeof(float), cudaMemcpyDeviceToHost);
return true;
}
// 执行后处理推理
bool runPostProcess(const float * img2lidars) {
auto input_handle1 = postproc_predictor_->GetInputHandle("conv2d_240.tmp_0");
input_handle1->Reshape(backbone_output_shape);
input_handle1->CopyFromCpu(host_output.data());
auto input_handle2 = postproc_predictor_->GetInputHandle("img2lidars");
input_handle2->Reshape(img2lidars_shape);
input_handle2->CopyFromCpu(img2lidars);
auto start = std::chrono::high_resolution_clock::now();
if (!postproc_predictor_->Run()) {
std::cerr << "后处理推理失败" << std::endl;
return false;
}
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> duration = end - start;
std::cout << " PaddleInfer耗时: " << duration.count() << " ms" << std::endl;
// 获取输出
outputs_.clear();
auto output_names = postproc_predictor_->GetOutputNames();
for (const auto& name : output_names) {
auto output_handle = postproc_predictor_->GetOutputHandle(name);
std::vector<int> shape = output_handle->shape();
int size = std::accumulate(shape.begin(), shape.end(), 1, std::multiplies<int>());
std::vector<float> output_data(size);
if (output_handle->type() == paddle_infer::INT64) {
assert(1 == shape.size());
std::vector<int64_t> label_i(shape.at(0));
output_handle->CopyToCpu(label_i.data());
std::vector<float> label_f(label_i.data(),
label_i.data() + shape.at(0));
memcpy(output_data.data(), label_f.data(),
shape.at(0) * sizeof(float));
} else {
output_handle->CopyToCpu(output_data.data());
}
outputs_[name] = output_data;
}
return true;
}
// 成员变量
std::string backbone_onnx_path_;
std::string backbone_engine_path_;
std::string postproc_model_dir_;
int gpu_id_;
nvinfer1::ICudaEngine* backbone_engine_ = nullptr;
nvinfer1::IRuntime* backbone_runtime_ = nullptr;
nvinfer1::IExecutionContext* backbone_context_ = nullptr;
void* device_input_ = nullptr;
void* device_output_ = nullptr;
std::shared_ptr<paddle_infer::Predictor> postproc_predictor_;
std::map<std::string, std::vector<float>> outputs_;
};
int main() {
try {
// 初始化推理器
PETRv1Inference infer(
"petrv1_backbone.onnx",
"petrv1_backbone.engine",
"petrv1_postproc",
0 // GPU ID
);
// 读取输入数据
std::ifstream images_file("../data/backbone_input.bin", std::ios::binary);
if (!images_file) {
std::cerr << "无法打开输入文件: " << "backbone_input.bin" << std::endl;
return EXIT_FAILURE;
}
images_file.seekg(0, std::ios::end);
size_t input_size = images_file.tellg();
images_file.seekg(0, std::ios::beg);
std::vector<float> images(input_size / sizeof(float));
images_file.read(reinterpret_cast<char*>(images.data()), input_size);
std::ifstream img2lidars_file("../data/input_img2lidars.bin", std::ios::binary);
if (!img2lidars_file) {
std::cerr << "无法打开输入文件: " << "input_img2lidars.bin" << std::endl;
return EXIT_FAILURE;
}
img2lidars_file.seekg(0, std::ios::end);
input_size = img2lidars_file.tellg();
img2lidars_file.seekg(0, std::ios::beg);
std::vector<float> img2lidars(input_size / sizeof(float));
img2lidars_file.read(reinterpret_cast<char*>(img2lidars.data()), input_size);
for(int i=0;i<5;i++)
{
auto start = std::chrono::high_resolution_clock::now();
infer.runInference(images.data(),img2lidars.data());
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> duration = end - start;
std::cout << i << " 推理耗时: " << duration.count() << " ms" << std::endl;
}
// 加载参考数据
Tensor output_var_4 = load_binary_data("../data/output__generated_var_4.bin");
Tensor output_var_9 = load_binary_data("../data/output__generated_var_9.bin");
Tensor output_var_14 = load_binary_data("../data/output__generated_var_14.bin");
// 获取输出
auto outputs = infer.getOutputs();
for (const auto& [name, data] : outputs) {
std::cout << "输出: " << name << ", 大小: " << data.size() << std::endl;
}
calculate_errors(outputs["save_infer_model/scale_0.tmp_0"].data(),
output_var_4.data.data(), output_var_4.data.size(), "boxes3d");
calculate_errors(outputs["save_infer_model/scale_1.tmp_0"].data(),
output_var_9.data.data(), output_var_9.data.size(), "scores");
calculate_errors(outputs["save_infer_model/scale_2.tmp_0"].data(),
output_var_14.data.data(), output_var_14.data.size(), "labels");
} catch (const std::exception& e) {
std::cerr << "推理错误: " << e.what() << std::endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
6_run_e2e_acc.cpp
#include <iterator>
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include <string.h>
#include <map>
#include <memory>
#include <algorithm>
#include <cuda_runtime_api.h>
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <paddle_inference_api.h>
#include <numeric>
#include <iterator>
#include <chrono> // 添加chrono库用于计时
#include <cblas.h>
#include <fstream>
#include <vector>
#include <string>
#include <iostream>
#include <filesystem>
namespace fs = std::filesystem;
using namespace std;
// 定义Tensor结构
struct Tensor {
std::vector<int> shape;
std::vector<float> data;
};
// 加载有形状信息的二进制文件
Tensor load_binary_data(const std::string& filename) {
std::ifstream file(filename, std::ios::binary);
if (!file) {
throw std::system_error(errno, std::system_category(), "无法打开文件: " + filename);
}
// 读取形状的维度
int shape_size;
if (!file.read(reinterpret_cast<char*>(&shape_size), sizeof(int))) {
throw std::runtime_error("读取形状大小失败: " + filename);
}
// 读取形状
std::vector<int> shape(shape_size);
if (!file.read(reinterpret_cast<char*>(shape.data()), shape_size * sizeof(int))) {
throw std::runtime_error("读取形状数据失败: " + filename);
}
// 计算数据大小
size_t num_elements = 1;
for (int dim : shape) {
if (dim <= 0) {
throw std::runtime_error("无效的形状维度: " + filename);
}
num_elements *= dim;
}
// 读取数据
std::vector<float> data(num_elements);
if (!file.read(reinterpret_cast<char*>(data.data()), num_elements * sizeof(float))) {
throw std::runtime_error("读取张量数据失败: " + filename);
}
return {shape, data};
}
// 加载原始float数据
std::vector<float> load_raw_float(const std::string& filename, size_t num_elements) {
std::ifstream file(filename, std::ios::binary);
if (!file) {
throw std::system_error(errno, std::system_category(), "无法打开文件: " + filename);
}
std::vector<float> data(num_elements);
if (!file.read(reinterpret_cast<char*>(data.data()), num_elements * sizeof(float))) {
throw std::runtime_error("读取浮点数据失败: " + filename);
}
return data;
}
// 计算误差指标
void calculate_errors(const float* infer, const float* ref, size_t n, const char* name) {
// 余弦相似度
float dot = cblas_sdot(n, infer, 1, ref, 1);
float norm_infer = cblas_snrm2(n, infer, 1);
float norm_ref = cblas_snrm2(n, ref, 1);
float cosine_similarity = dot / (norm_infer * norm_ref + 1e-8f);
// 计算绝对误差和相对误差
float max_abs_error = 0.0f;
float max_rel_error = 0.0f;
double mse_sum = 0.0;
for (size_t i = 0; i < n; i++) {
float abs_error = std::abs(infer[i] - ref[i]);
if (abs_error > max_abs_error) {
max_abs_error = abs_error;
}
float rel_error = abs_error / (std::abs(ref[i]) + 1e-8f);
if (rel_error > max_rel_error) {
max_rel_error = rel_error;
}
mse_sum += static_cast<double>(abs_error) * abs_error;
}
float mse = static_cast<float>(mse_sum / n);
std::cout << std::string(50, '=') << std::endl;
std::cout << "张量: " << name << " | 元素个数: " << n << std::endl;
std::cout << "余弦相似度: "<< cosine_similarity << std::endl;
std::cout << "最大绝对误差: " << max_abs_error << std::endl;
std::cout << "最大相对误差: " << max_rel_error << std::endl;
std::cout << "均方误差(MSE): " << mse << std::endl;
}
// TensorRT日志记录器
class Logger : public nvinfer1::ILogger {
public:
void log(Severity severity, const char* msg) noexcept override {
if (severity >= Severity::kERROR)
{
// std::cout << msg << std::endl;
}
}
} gLogger;
// IInt8MinMaxCalibrator实现类
class MinMaxCalibrator : public nvinfer1::IInt8MinMaxCalibrator {
public:
MinMaxCalibrator(const std::string& inputFile, const std::string& inputBlobName, int batchSize)
: mInputFile(inputFile), mInputBlobName(inputBlobName), mBatchSize(batchSize), mCurrentBatch(0) {
// 读取输入二进制文件
std::ifstream file(inputFile, std::ios::binary);
if (!file) {
std::cerr << "无法打开输入文件: " << inputFile << std::endl;
exit(EXIT_FAILURE);
}
// 获取文件大小
file.seekg(0, std::ios::end);
mInputSize = file.tellg();
file.seekg(0, std::ios::beg);
// 分配内存并读取数据
mHostData.resize(mInputSize / sizeof(float));
file.read(reinterpret_cast<char*>(mHostData.data()), mInputSize);
// 计算总批次数
const int inputDimsProduct = 6 * 3 * 320 * 800; // 6, 3, 320, 800
mTotalBatches = mHostData.size() / (mBatchSize * inputDimsProduct);
std::cout << "校准器初始化: 共 " << mTotalBatches << " 批, 每批 " << mBatchSize << " 样本" << std::endl;
// 分配设备内存
cudaMalloc(&mDeviceData, mBatchSize * inputDimsProduct * sizeof(float));
}
~MinMaxCalibrator() override {
cudaFree(mDeviceData);
}
int getBatchSize() const noexcept override {
return mBatchSize;
}
bool getBatch(void* bindings[], const char* names[], int nbBindings) noexcept override {
if (mCurrentBatch >= mTotalBatches) {
return false; // 没有更多批次
}
const int inputDimsProduct = 6 * 3 * 320 * 800; // 6, 3, 320, 800
const size_t batchByteSize = mBatchSize * inputDimsProduct * sizeof(float);
// 复制当前批次数据到设备
cudaMemcpy(mDeviceData,
mHostData.data() + mCurrentBatch * inputDimsProduct,
batchByteSize,
cudaMemcpyHostToDevice);
// 设置绑定
for (int i = 0; i < nbBindings; ++i) {
if (strcmp(names[i], mInputBlobName.c_str()) == 0) {
bindings[i] = mDeviceData;
break;
}
}
mCurrentBatch++;
return true;
}
const void* readCalibrationCache(size_t& length) noexcept override {
mCache.clear();
std::ifstream input("petrv1_calibration.cache", std::ios::binary);
if (input.good()) {
input >> std::noskipws;
std::copy(std::istream_iterator<char>(input), std::istream_iterator<char>(),
std::back_inserter(mCache));
}
length = mCache.size();
return length ? mCache.data() : nullptr;
}
void writeCalibrationCache(const void* cache, size_t length) noexcept override {
std::ofstream output("petrv1_calibration.cache", std::ios::binary);
output.write(reinterpret_cast<const char*>(cache), length);
}
private:
std::string mInputFile;
std::string mInputBlobName;
int mBatchSize;
int mTotalBatches;
int mCurrentBatch;
size_t mInputSize;
std::vector<float> mHostData;
void* mDeviceData{nullptr};
std::vector<char> mCache;
};
// Paddle Inference的Blob类
template <typename Dtype> class Blob {
public:
Blob(const std::vector<int> &shape) : shape_(shape) {
count_ = 1;
for (int dim : shape_) {
count_ *= dim;
}
data_.reset(new Dtype[count_]());
}
const std::vector<int> &shape() const { return shape_; }
void Reshape(const std::vector<int> &shape) {
shape_ = shape;
count_ = 1;
for (int dim : shape_) {
count_ *= dim;
}
data_.reset(new Dtype[count_]());
}
Dtype *mutable_cpu_data() { return data_.get(); }
const Dtype *cpu_data() const { return data_.get(); }
const Dtype *gpu_data() const { return data_.get(); }
int count() const { return count_; }
private:
std::vector<int> shape_;
int count_;
std::unique_ptr<Dtype[]> data_;
};
template <typename Dtype> using BlobPtr = std::shared_ptr<Blob<Dtype>>;
typedef std::map<std::string, BlobPtr<float>> BlobMap;
// PETRv1推理类
class PETRv1Inference {
public:
PETRv1Inference(const std::string& backbone_onnx_path,
const std::string& backbone_engine_path,
const std::string& postproc_model_dir,
int gpu_id = 0)
: backbone_onnx_path_(backbone_onnx_path),
backbone_engine_path_(backbone_engine_path),
postproc_model_dir_(postproc_model_dir),
gpu_id_(gpu_id) {
// 初始化Backbone
if (!initializeBackbone()) {
throw std::runtime_error("Failed to initialize backbone");
}
// 初始化后处理
if (!initializePostProcess()) {
throw std::runtime_error("Failed to initialize post-process");
}
}
~PETRv1Inference() {
// 释放资源
if (backbone_context_) {
backbone_context_->destroy();
}
if (backbone_engine_) {
backbone_engine_->destroy();
}
if (backbone_runtime_) {
backbone_runtime_->destroy();
}
if (device_input_) {
cudaFree(device_input_);
}
if (device_output_) {
cudaFree(device_output_);
}
}
// 执行完整推理流程
bool runInference(const float* images,
const float* img2lidars,
int warmup_iters = 5,
int perf_iters = 10) {
// 执行Backbone推理
if (!runBackbone(images)) {
return false;
}
// 执行后处理推理
if (!runPostProcess(img2lidars)) {
return false;
}
return true;
}
// 获取后处理输出
const std::map<std::string, std::vector<float>>& getOutputsFloat() const {
return outputs_float_;
}
const std::map<std::string, std::vector<int64_t>>& getOutputsInt64() const {
return outputs_int64_;
}
private:
// 分配设备内存
const int input_size = 6 * 3 * 320 * 800 * sizeof(float);
const int output_size = 6 * 256 * 20 * 50 * sizeof(float);
std::vector<float> host_output;
std::vector<int> img2lidars_shape = {1, 6, 4, 4};
std::vector<int> backbone_output_shape = {6, 256, 20, 50};
std::vector<int> shape_0_shape = {5};
// 初始化Backbone
bool initializeBackbone() {
host_output.resize(6 * 256 * 20 * 50);
// 检查引擎文件是否存在
std::ifstream engine_file(backbone_engine_path_, std::ios::binary);
if (engine_file.good()) {
std::cout << "发现已存在的引擎文件,正在加载..." << std::endl;
engine_file.seekg(0, std::ios::end);
size_t engine_size = engine_file.tellg();
engine_file.seekg(0, std::ios::beg);
std::vector<char> engine_data(engine_size);
engine_file.read(engine_data.data(), engine_size);
backbone_runtime_ = nvinfer1::createInferRuntime(gLogger);
backbone_engine_ = backbone_runtime_->deserializeCudaEngine(engine_data.data(), engine_size, nullptr);
std::cout << "引擎加载完成" << std::endl;
} else {
// 构建新引擎
std::cout << "未发现引擎文件,开始从ONNX模型构建..." << std::endl;
auto builder = nvinfer1::createInferBuilder(gLogger);
const auto explicit_batch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
auto network = builder->createNetworkV2(explicit_batch);
auto parser = nvonnxparser::createParser(*network, gLogger);
if (!parser->parseFromFile(backbone_onnx_path_.c_str(),
static_cast<int>(nvinfer1::ILogger::Severity::kINFO))) {
std::cerr << "解析ONNX模型失败" << std::endl;
return false;
}
auto config = builder->createBuilderConfig();
config->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kWORKSPACE, 1ULL << 30);
config->setFlag(nvinfer1::BuilderFlag::kFP16);
config->setFlag(nvinfer1::BuilderFlag::kINT8);
config->setFlag(nvinfer1::BuilderFlag::kDIRECT_IO);
config->clearFlag(nvinfer1::BuilderFlag::kTF32);
auto calibrator = new MinMaxCalibrator("../Paddle3D/model_iodata/0_img.bin", "images", 1);
config->setInt8Calibrator(calibrator);
if (builder->getNbDLACores() > 0) {
std::cout << "发现可用的DLA核心,将使用DLA进行加速" << std::endl;
config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);
} else {
std::cout << "未发现可用的DLA核心,将使用GPU进行推理" << std::endl;
}
backbone_engine_ = builder->buildEngineWithConfig(*network, *config);
if (!backbone_engine_) {
std::cerr << "构建引擎失败" << std::endl;
return false;
}
// 保存引擎
auto serialized_engine = backbone_engine_->serialize();
std::ofstream output_file(backbone_engine_path_, std::ios::binary);
output_file.write((const char*)serialized_engine->data(), serialized_engine->size());
serialized_engine->destroy();
// 清理资源
delete calibrator;
delete parser;
delete network;
delete config;
delete builder;
}
// 创建执行上下文
backbone_context_ = backbone_engine_->createExecutionContext();
cudaMalloc(&device_input_, input_size);
cudaMalloc(&device_output_, output_size);
return true;
}
// 初始化后处理
bool initializePostProcess() {
paddle_infer::Config config;
std::string model_file = postproc_model_dir_ + "/inference.pdmodel";
std::string params_file = postproc_model_dir_ + "/inference.pdiparams";
config.SetModel(model_file, params_file);
if (gpu_id_ >= 0) {
//config.EnableUseGpu(1000, gpu_id_);
config.EnableUseGpu(512, 0,paddle::AnalysisConfig::Precision::kHalf);
}
config.EnableMemoryOptim();
config.SwitchIrOptim(true);
config.EnableCUDNN();
config.Exp_DisableMixedPrecisionOps({"fill_any_like","softmax","layer_norm","sigmoid", "log","clip","elementwise_div","elementwise_pow",
"elementwise_floordiv","nearest_interp_v2",
"elementwise_add","bitwise_not","elementwise_sub","bitwise_and",
"exp","top_k_v2","isnan_v2","fill_constant"});
config.DisableGlogInfo();
postproc_predictor_ = paddle_infer::CreatePredictor(config);
if (!postproc_predictor_) {
std::cerr << "创建Paddle Predictor失败" << std::endl;
return false;
}
auto input_handle3 = postproc_predictor_->GetInputHandle("shape_0.tmp_0");
input_handle3->Reshape(shape_0_shape);
int shape_data[5] = {1, 6, 3, 320, 800};
input_handle3->CopyFromCpu(shape_data);
return true;
}
// 执行Backbone推理
bool runBackbone(const float *images) {
cudaMemcpy(device_input_, images, input_size, cudaMemcpyHostToDevice);
void* bindings[] = {device_input_, device_output_};
auto start = std::chrono::high_resolution_clock::now();
backbone_context_->executeV2(bindings);
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> duration = end - start;
std::cout << " TrtInfer耗时: " << duration.count() << " ms;";
cudaMemcpy(host_output.data(), device_output_, host_output.size() * sizeof(float), cudaMemcpyDeviceToHost);
return true;
}
// 执行后处理推理
bool runPostProcess(const float * img2lidars) {
auto input_handle1 = postproc_predictor_->GetInputHandle("conv2d_240.tmp_0");
input_handle1->Reshape(backbone_output_shape);
input_handle1->CopyFromCpu(host_output.data());
auto input_handle2 = postproc_predictor_->GetInputHandle("img2lidars");
input_handle2->Reshape(img2lidars_shape);
input_handle2->CopyFromCpu(img2lidars);
auto start = std::chrono::high_resolution_clock::now();
if (!postproc_predictor_->Run()) {
std::cerr << "后处理推理失败" << std::endl;
return false;
}
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> duration = end - start;
std::cout << " PaddleInfer耗时: " << duration.count() << " ms;" ;
// 获取输出
outputs_float_.clear();
outputs_int64_.clear();
auto output_names = postproc_predictor_->GetOutputNames();
for (const auto& name : output_names) {
auto output_handle = postproc_predictor_->GetOutputHandle(name);
std::vector<int> shape = output_handle->shape();
int size = std::accumulate(shape.begin(), shape.end(), 1, std::multiplies<int>());
std::vector<float> output_data(size);
if (output_handle->type() == paddle_infer::INT64) {
assert(1 == shape.size());
std::vector<int64_t> label_i(shape.at(0));
output_handle->CopyToCpu(label_i.data());
outputs_int64_[name] = label_i;
} else {
output_handle->CopyToCpu(output_data.data());
outputs_float_[name] = output_data;
}
}
return true;
}
// 成员变量
std::string backbone_onnx_path_;
std::string backbone_engine_path_;
std::string postproc_model_dir_;
int gpu_id_;
nvinfer1::ICudaEngine* backbone_engine_ = nullptr;
nvinfer1::IRuntime* backbone_runtime_ = nullptr;
nvinfer1::IExecutionContext* backbone_context_ = nullptr;
void* device_input_ = nullptr;
void* device_output_ = nullptr;
std::shared_ptr<paddle_infer::Predictor> postproc_predictor_;
std::map<std::string, std::vector<float>> outputs_float_;
std::map<std::string, std::vector<int64_t>> outputs_int64_;
};
int main() {
try {
// 初始化推理器
PETRv1Inference infer(
"petrv1_backbone.onnx",
"petrv1_backbone.engine",
"petrv1_postproc",
0 // GPU ID
);
int idx = 0;
while (true)
{ // 构建文件路径
std::string img_path = "../Paddle3D/model_iodata/" + std::to_string(idx) + "_img.bin";
std::string img2lidars_path = "../Paddle3D/model_iodata/" + std::to_string(idx) + "_img2lidars.bin";
std::string bboxes_path = "../Paddle3D/model_iodata/" + std::to_string(idx) + "_bboxes.bin";
std::string scores_path = "../Paddle3D/model_iodata/" + std::to_string(idx) + "_scores.bin";
std::string labels_path = "../Paddle3D/model_iodata/" + std::to_string(idx) + "_labels.bin";
// 检查文件是否存在
if (!fs::exists(img_path)) {
break;
}
// 读取图像数据
std::ifstream img_file(img_path, std::ios::binary);
if (!img_file) {
std::cerr << "无法打开文件: " << img_path << std::endl;
break;
}
// 获取文件大小并读取数据
img_file.seekg(0, std::ios::end);
size_t size = img_file.tellg();
img_file.seekg(0, std::ios::beg);
std::vector<float> input_images_data(size / sizeof(float));
img_file.read(reinterpret_cast<char*>(input_images_data.data()), size);
// 预期形状为 [1, 6, 3, 320, 800];数据在内存中保持一维连续,这里只校验元素总数是否匹配
const int dims[] = {1, 6, 3, 320, 800};
const int total_elements = dims[0] * dims[1] * dims[2] * dims[3] * dims[4];
if (input_images_data.size() != total_elements) {
std::cerr << "文件大小与预期形状不匹配: " << img_path << std::endl;
break;
}
// 读取img2lidars数据
std::ifstream lidars_file(img2lidars_path, std::ios::binary);
if (!lidars_file) {
std::cerr << "无法打开文件: " << img2lidars_path << std::endl;
break;
}
lidars_file.seekg(0, std::ios::end);
size = lidars_file.tellg();
lidars_file.seekg(0, std::ios::beg);
std::vector<float> input_img2lidars_data(size / sizeof(float));
lidars_file.read(reinterpret_cast<char*>(input_img2lidars_data.data()), size);
// 重塑为 [1, 6, 4, 4]
const int lidars_dims[] = {1, 6, 4, 4};
const int lidars_total_elements = lidars_dims[0] * lidars_dims[1] * lidars_dims[2] * lidars_dims[3];
if (input_img2lidars_data.size() != lidars_total_elements) {
std::cerr << "文件大小与预期形状不匹配: " << img2lidars_path << std::endl;
break;
}
std::cout << idx << " ";
auto start = std::chrono::high_resolution_clock::now();
bool ok = infer.runInference(input_images_data.data(), input_img2lidars_data.data());
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> duration = end - start;
std::cout << " E2E耗时: " << duration.count() << " ms;" << std::endl;
if (!ok) {
std::cerr << "第 " << idx << " 帧推理失败,终止" << std::endl;
break;
}
// 获取输出
auto Floatoutputs = infer.getOutputsFloat();
auto Int64outputs = infer.getOutputsInt64();
auto bboxes = Floatoutputs["save_infer_model/scale_0.tmp_0"];
auto scores = Floatoutputs["save_infer_model/scale_1.tmp_0"];
auto labels = Int64outputs["save_infer_model/scale_2.tmp_0"];
{
std::ofstream outputFile1(bboxes_path, std::ios::binary);
outputFile1.write(reinterpret_cast<char*>(bboxes.data()), 300*9 * sizeof(float));
}
{
std::ofstream outputFile1(scores_path, std::ios::binary);
outputFile1.write(reinterpret_cast<char*>(scores.data()), 300 * sizeof(float));
}
{
std::ofstream outputFile1(labels_path, std::ios::binary);
outputFile1.write(reinterpret_cast<char*>(labels.data()), 300 * sizeof(int64_t));
}
idx++;
}
} catch (const std::exception& e) {
std::cerr << "推理错误: " << e.what() << std::endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
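补充说明:上面的程序会把每一帧的推理输出写入 ../Paddle3D/model_iodata/ 目录,随后由 pred_eval.py 读取并计算精度。如有需要,可以用类似下面的脚本(文件名 check_iodata.py 仅为示例,非本方案必需)在计算精度前快速核对各 bin 文件的元素个数和数值是否正常,形状约定与上面的代码保持一致:
# check_iodata.py(示例):核对 model_iodata/ 中 bin 文件的元素个数与数值
import os
import numpy as np
# 与前面 C++ / Python 程序一致的形状与数据类型约定
SPECS = {
    '_img.bin': ((1, 6, 3, 320, 800), np.float32),
    '_img2lidars.bin': ((1, 6, 4, 4), np.float32),
    '_bboxes.bin': ((300, 9), np.float32),
    '_scores.bin': ((300,), np.float32),
    '_labels.bin': ((300,), np.int64),
}
idx = 0
while os.path.exists(f'model_iodata/{idx}_img.bin'):
    for suffix, (shape, dtype) in SPECS.items():
        path = f'model_iodata/{idx}{suffix}'
        if not os.path.exists(path):
            continue  # 对应的输出文件可能尚未生成
        data = np.fromfile(path, dtype=dtype)
        expected = int(np.prod(shape))
        assert data.size == expected, f'{path}: 元素数 {data.size} != 预期 {expected}'
        assert np.isfinite(data.astype(np.float64)).all(), f'{path}: 存在 NaN/Inf'
    idx += 1
print(f'共检查 {idx} 帧数据,元素个数与数值均正常')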
pred_eval.py
import argparse
import os
import random
import numpy as np
import paddle
from paddle3d.apis.config import Config
from paddle3d.apis.trainer import Trainer
from paddle3d.slim import get_qat_config
from paddle3d.utils.logger import logger
from paddle3d.sample import Sample, SampleMeta
from paddle3d.geometries import BBoxes3D
def bbox3d2result(bboxes, scores, labels, attrs=None):
"""Convert detection results to a list of numpy arrays.
"""
result_dict = dict(
boxes_3d=bboxes, scores_3d=scores, labels_3d=labels)
if attrs is not None:
result_dict['attrs_3d'] = attrs
return result_dict
class CustomTrainer(Trainer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def _parse_results_to_sample(self, results: dict, sample: dict):
num_samples = len(results)
new_results = []
for i in range(num_samples):
data = Sample(None, sample["modality"][i])
bboxes_3d = results[i]['pts_bbox']["boxes_3d"]
labels = results[i]['pts_bbox']["labels_3d"]
confidences = results[i]['pts_bbox']["scores_3d"]
bottom_center = bboxes_3d[:, :3]
gravity_center = np.zeros_like(bottom_center)
gravity_center[:, :2] = bottom_center[:, :2]
gravity_center[:, 2] = bottom_center[:, 2] + bboxes_3d[:, 5] * 0.5
bboxes_3d[:, :3] = gravity_center
data.bboxes_3d = BBoxes3D(bboxes_3d[:, 0:7])
data.bboxes_3d.coordmode = 'Lidar'
data.bboxes_3d.origin = [0.5, 0.5, 0.5]
data.bboxes_3d.rot_axis = 2
data.bboxes_3d.velocities = bboxes_3d[:, 7:9]
data['bboxes_3d_numpy'] = bboxes_3d[:, 0:7]
data['bboxes_3d_coordmode'] = 'Lidar'
data['bboxes_3d_origin'] = [0.5, 0.5, 0.5]
data['bboxes_3d_rot_axis'] = 2
data['bboxes_3d_velocities'] = bboxes_3d[:, 7:9]
data.labels = labels
data.confidences = confidences
data.meta = SampleMeta(id=sample["meta"][i]['id'])
if "calibs" in sample:
calib = [calibs.numpy()[i] for calibs in sample["calibs"]]
data.calibs = calib
new_results.append(data)
return new_results
def simple_test_pts(self,idx):
with open(f'model_iodata/{idx}_bboxes.bin', 'rb') as f:
bboxes = np.frombuffer(f.read(), dtype=np.float32).reshape((300,9)).copy()
with open(f'model_iodata/{idx}_scores.bin', 'rb') as f:
scores = np.frombuffer(f.read(), dtype=np.float32).reshape((300,)).copy()
with open(f'model_iodata/{idx}_labels.bin', 'rb') as f:
labels = np.frombuffer(f.read(), dtype=np.int64).reshape((300,)).copy()
bbox_results = [bbox3d2result(bboxes, scores, labels)]
return bbox_results
def evaluate(self):
msg = 'evaluate on validate dataset'
metric_obj = self.val_dataset.metric
for idx, sample in self.logger.enumerate(self.eval_dataloader, msg=msg):
img_metas = sample['meta']
bbox_list = [dict() for i in range(len(img_metas))]
bbox_pts = self.simple_test_pts(idx)
for result_dict, pts_bbox in zip(bbox_list, bbox_pts):
result_dict['pts_bbox'] = pts_bbox
preds = self._parse_results_to_sample(bbox_list, sample)
metric_obj.update(predictions=preds, ground_truths=sample)
metrics = metric_obj.compute(verbose=True)
return metrics
batch_size=1
cfg = Config(path='configs/petr/petr_vovnet_gridmask_p4_800x320.yml', batch_size=batch_size)
dic = cfg.to_dict()
batch_size = dic.pop('batch_size')
dic.update({'dataloader_fn': {
'batch_size': batch_size,
'num_workers': 1}})
dic['checkpoint'] = None
dic['resume'] = False
trainer = CustomTrainer(**dic)
trainer.evaluate()
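说明:pred_eval.py 不加载网络权重,也不执行模型前向计算;它按 configs/petr/petr_vovnet_gridmask_p4_800x320.yml 构建验证集后(配置文件为相对路径,需在 Paddle3D 仓库目录下运行),直接读取 model_iodata/ 中保存的推理输出(bboxes/scores/labels)并计算 mAP。因此,同一个脚本既可以用于评估 FP32 基准结果,也可以用于评估混合精度推理的结果。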
my_infer.py
import numpy as np
import paddle
import os
import sys
import time
import paddle.inference as paddle_infer
import glob
import tqdm
def main():
config = paddle_infer.Config("/home/petrv1/petr_inference.pdmodel",
"/home/petrv1/petr_inference.pdiparams")
config.enable_use_gpu(256, 0)
config.disable_glog_info()
predictor = paddle_infer.create_predictor(config)
input_names = predictor.get_input_names()
output_names = predictor.get_output_names()
print("input_names:",input_names)
print("output_names:",output_names)
idx=0
while True:
img_path=f'model_iodata/{idx}_img.bin'
if not os.path.exists(img_path):
break
with open(img_path, 'rb') as f:
input_images = np.frombuffer(f.read(), dtype=np.float32).reshape((1,6, 3, 320, 800))
img2lidars_path=f'model_iodata/{idx}_img2lidars.bin'
with open(img2lidars_path, 'rb') as f:
input_img2lidars = np.frombuffer(f.read(), dtype=np.float32).reshape((1,6,4,4))
predictor.get_input_handle(input_names[0]).copy_from_cpu(input_images)
predictor.get_input_handle(input_names[1]).copy_from_cpu(input_img2lidars)
predictor.run()
output0_tensor = predictor.get_output_handle(output_names[0])
output1_tensor = predictor.get_output_handle(output_names[1])
output2_tensor = predictor.get_output_handle(output_names[2])
bboxes = output0_tensor.copy_to_cpu()
scores = output1_tensor.copy_to_cpu()
labels = output2_tensor.copy_to_cpu()
with open(f'model_iodata/{idx}_bboxes.bin', 'wb') as f:
f.write(bboxes.tobytes())
with open(f'model_iodata/{idx}_scores.bin', 'wb') as f:
f.write(scores.tobytes())
with open(f'model_iodata/{idx}_labels.bin', 'wb') as f:
f.write(labels.tobytes())
idx+=1
if __name__ == "__main__":
main()
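需要注意,my_infer.py 写出的 FP32 基准输出与前面混合精度 C++ 程序写出的结果使用完全相同的文件名({idx}_bboxes.bin 等)。如果希望保留 FP32 基准结果以便后续对比,可以在运行混合精度推理前先做一次备份,下面是一个简单示例(备份目录名 model_iodata_fp32 为假设值):
# 备份 FP32 基准输出,避免被混合精度推理结果覆盖(目录名仅为示例)
import glob
import os
import shutil
src_dir = 'model_iodata'
dst_dir = 'model_iodata_fp32'  # 假设的备份目录名
os.makedirs(dst_dir, exist_ok=True)
# 只备份推理输出,输入(img/img2lidars)无需备份
for pattern in ('*_bboxes.bin', '*_scores.bin', '*_labels.bin'):
    for path in glob.glob(os.path.join(src_dir, pattern)):
        shutil.copy2(path, dst_dir)
print('FP32 推理输出已备份到', dst_dir)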
my_eval.py
import argparse
import os
import random
import numpy as np
import paddle
from paddle3d.apis.config import Config
from paddle3d.apis.trainer import Trainer
from paddle3d.slim import get_qat_config
from paddle3d.utils.checkpoint import load_pretrained_model
from paddle3d.utils.logger import logger
from paddle3d.apis.pipeline import training_step, validation_step
import inspect
class CustomTrainer(Trainer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def evaluate(self):
msg = 'evaluate on validate dataset'
metric_obj = self.val_dataset.metric
# 获取对象的类
obj_class = metric_obj.__class__
# 获取类名
class_name = obj_class.__name__
# 获取类定义所在的文件
try:
file_path = inspect.getfile(obj_class)
except TypeError:
file_path = "内置类型或C扩展模块(无法获取文件路径)"
print(f"类名: {class_name}")
print(f"所在文件: {file_path}")
for idx, sample in self.logger.enumerate(self.eval_dataloader, msg=msg):
result = validation_step(self.model, sample)
img=sample['img'].numpy().astype(np.float32).reshape((1,6, 3, 320, 800))
img_metas = sample['meta']
img2lidars = []
for img_meta in img_metas:
img2lidar = []
for i in range(len(img_meta['lidar2img'])):
img2lidar.append(np.linalg.inv(img_meta['lidar2img'][i]))
img2lidars.append(np.asarray(img2lidar))
img2lidars = np.asarray(img2lidars).astype(np.float32)
with open(f'model_iodata/{idx}_img.bin', 'wb') as f:
f.write(img.tobytes())
with open(f'model_iodata/{idx}_img2lidars.bin', 'wb') as f:
f.write(img2lidars.tobytes())
metric_obj.update(predictions=result, ground_truths=sample)
metrics = metric_obj.compute(verbose=True)
return metrics
batch_size=1
cfg = Config(path='configs/petr/petr_vovnet_gridmask_p4_800x320.yml', batch_size=batch_size)
dic = cfg.to_dict()
batch_size = dic.pop('batch_size')
dic.update({'dataloader_fn': {
'batch_size': batch_size,
'num_workers': 1}})
load_pretrained_model(cfg.model, "model.pdparams")
dic['checkpoint'] = None
dic['resume'] = False
trainer = CustomTrainer(**dic)
trainer.evaluate()
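另外,my_eval.py 在评估循环中直接向 model_iodata/ 目录写入每帧输入数据,而脚本本身没有创建该目录;如果目录不存在,写文件会抛出异常。可以在运行前手动创建,或参考下面的示例在脚本开头补一行(仅为建议性补充):
# 在 my_eval.py 开头确保输出目录存在(建议性补充)
import os
os.makedirs('model_iodata', exist_ok=True)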