Platform: J6M
OE version: 3.7.0
I compared the overall model structure, inputs, and outputs of qat.hbm and qat.quant.bc and found that they match. Then, feeding both models the same input, I ran the following commands:
hrt_model_exec infer --model_file ./hbm/qat.hbm --input_file ./bin/img_0_y.bin,./bin/img_0_uv.bin,./bin/img_1_y.bin,./bin/img_1_uv.bin,./bin/img_2_y.bin,./bin/img_2_uv.bin,./bin/img_3_y.bin,./bin/img_3_uv.bin,./bin/img_4_y.bin,./bin/img_4_uv.bin,./bin/img_5_y.bin,./bin/img_5_uv.bin,./npy/lidar2img.npy,./bin/cached_anchor.bin,./bin/cached_feature.bin --enable_dump true --dump_format "txt" --dump_path ./hbm_board_txt
hrt_model_exec infer --model_file ./hbm/qat.quant.bc --input_file ./bin/img_0_y.bin,./bin/img_0_uv.bin,./bin/img_1_y.bin,./bin/img_1_uv.bin,./bin/img_2_y.bin,./bin/img_2_uv.bin,./bin/img_3_y.bin,./bin/img_3_uv.bin,./bin/img_4_y.bin,./bin/img_4_uv.bin,./bin/img_5_y.bin,./bin/img_5_uv.bin,./bin/lidar2img.bin,./bin/cached_anchor.bin,./bin/cached_feature.bin --enable_dump true --dump_format "txt" --dump_path ./bc_board_txt
This gave me the inference results of the two models. Why do the results fail to match?
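To narrow down where the two runs diverge, the dumped txt tensors can be compared file by file. Below is a minimal sketch, assuming each dump file contains plain whitespace-separated float values and that file names correspond between the two dump directories (both assumptions may need adjusting to the actual dump layout):

```python
import numpy as np
from pathlib import Path

def compare_dumps(dir_a, dir_b, atol=1e-3):
    """Compare same-named txt dumps from two directories.

    Returns {filename: (max_abs_diff, cosine_similarity)} for files
    present in both directories with matching element counts.
    """
    results = {}
    for f in sorted(Path(dir_a).glob("*.txt")):
        g = Path(dir_b) / f.name
        if not g.exists():
            print(f"{f.name}: missing in {dir_b}")
            continue
        a = np.loadtxt(f).ravel()
        b = np.loadtxt(g).ravel()
        if a.shape != b.shape:
            print(f"{f.name}: size mismatch {a.shape} vs {b.shape}")
            continue
        diff = float(np.max(np.abs(a - b)))
        cos = float(np.dot(a, b) /
                    (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        results[f.name] = (diff, cos)
        status = "OK" if diff <= atol else "DIFF"
        print(f"{f.name}: max_abs_diff={diff:.6f} cosine={cos:.6f} [{status}]")
    return results

# e.g. compare_dumps("./hbm_board_txt", "./bc_board_txt")
```

Running this over the two dump directories and looking for the first layer whose cosine similarity drops sharply should point to the operator where the outputs start to diverge.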
The attachment contains my qat.hbm and qat.quant.bc models along with their inputs; I have also attached qat.bc. Where else should I look to troubleshoot this?
