OE 3.5.0
Experiment hyperparameters: calibration_step: 500, observer: mse
calib_float is the calibration-vs-float comparison result; qua_float is the QAT-vs-float comparison result.
Analysis tools used: abnormal_layer_advisor, compare_per_layer_out
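For context, a minimal sketch of what an "mse" observer optimizes. This is an assumption about the general technique, not the toolchain's actual implementation: sweep candidate clipping ranges and keep the one whose fake-quantized reconstruction has the lowest mean squared error, which lets it sacrifice rare outliers instead of stretching the scale.

```python
# Hedged sketch of an MSE observer (NOT the toolchain's implementation):
# try a grid of clipping thresholds and pick the scale with minimum MSE.
def mse_scale(values, qmax=127, n_candidates=100):
    vmax = max(abs(v) for v in values)
    best_scale, best_err = vmax / qmax, float("inf")
    for i in range(1, n_candidates + 1):
        scale = (vmax * i / n_candidates) / qmax  # clip range = vmax * i/n
        err = sum(
            (v - max(-qmax, min(qmax, round(v / scale))) * scale) ** 2
            for v in values
        ) / len(values)
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale

# Many small activations plus one outlier: the MSE-optimal scale clips the
# outlier rather than covering the full range (shown with a 4-bit-style
# qmax so the effect is visible on a small sample).
inliers = [(-1) ** k * (k % 100) / 100 for k in range(1000)]
print(mse_scale(inliers + [10.0], qmax=7) < 10.0 / 7)  # True
```

This is why an mse observer can behave very differently from a min/max observer when a layer emits a huge data range like the one in the warning below.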
Oversized data range warning: "Total data range 130925.09375 maybe too large for quantization."
For this warning, the current change is in hat/models/task_modules/sparsebevoe/blocks.py:
"""
In DeformableFeatureAggregationOE.project_points:

# Keep depth away from zero so the reciprocal cannot blow up.
depth_clamped = torch.clamp(points_2d[..., 2:3], min=0.01)
depth = self.reciprocal_op(depth_clamped)
depth = torch.clamp(depth, max=100.0)
# Bound the projected coordinates to a quantization-friendly range.
xy = points_2d[..., :2]
points_2d = self.point_mul.mul(xy, depth)
points_2d = torch.clamp(points_2d, -1.1, 1.1)
"""
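A standalone numeric check of the clamping scheme above (plain Python, same constants; `project_xy` is a hypothetical scalar stand-in for the tensor code): bounding depth at 0.01 caps the reciprocal at 100, so a near-zero depth can no longer produce a ~1.3e5 data range.

```python
# Scalar mock-up of the clamp chain in project_points (same constants).
def project_xy(x, y, z):
    depth = max(z, 0.01)                 # clamp(min=0.01)
    inv = min(1.0 / depth, 100.0)        # reciprocal_op + clamp(max=100.0)
    clip = lambda v: max(-1.1, min(1.1, v))
    return clip(x * inv), clip(y * inv)  # point_mul + clamp(-1.1, 1.1)

print(project_xy(5.0, -3.0, 1e-4))  # (1.1, -1.1): saturated, not exploded
```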
In addition, corresponding changes were made inside set_qconfig, but judging from the results some modules do not seem to take effect. What could be the reason: is the qconfig being applied in the wrong place, or is the overall approach itself flawed?
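One way to rule out "qconfig set in the wrong place" is to walk named_modules() after set_qconfig() runs and dump what each leaf submodule actually carries; modules that show None were missed, e.g. created after set_qconfig ran, replaced during fusion, or addressed by the wrong name. The toy model and the "int16" string below are assumptions for illustration; the real check would run on the SparseBEVOE model with real QConfig objects.

```python
import torch.nn as nn

# Toy model standing in for the real network; only the first leaf gets a
# qconfig, mimicking a set_qconfig that misses some modules.
model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))
model[0].qconfig = "int16"  # stand-in for the real QConfig object

# Collect leaf modules that never received a qconfig.
missed = [
    name
    for name, mod in model.named_modules()
    if not list(mod.children()) and getattr(mod, "qconfig", None) is None
]
print(missed)  # names of leaves the qconfig never reached
```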
Also seeing "Current scale does not cover the data range": the large upstream error propagates downstream and triggers clipping/truncation there. My guess is that the root cause is still the configuration above not taking effect.
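A numeric illustration of that message, using a generic symmetric int8 fake-quant (a sketch, not the toolchain's kernel): if a downstream op's scale was calibrated on a normal range, an upstream layer that now emits huge values gets hard-saturated, so the error is clipped rather than passed through faithfully.

```python
# Generic symmetric int8 fake-quantization; saturation happens at the clamp.
def fake_quant(x, scale, qmin=-128, qmax=127):
    q = max(qmin, min(qmax, round(x / scale)))
    return q * scale

scale = 1.0 / 127  # calibrated to cover roughly [-1, 1]
print(fake_quant(0.5, scale))        # close to 0.5 (small rounding error)
print(fake_quant(130925.09, scale))  # clipped to ~1.0, everything above lost
```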

