I am building an app that performs object detection and captures images. I followed https://www.tensorflow.org/lite/android/tutorials/object_detection and https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android as sample code.
import android.annotation.SuppressLint
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.RectF
import android.view.View
import org.tensorflow.lite.task.vision.detector.Detection
import kotlin.math.max

class PlotBoundingBox(
    context: Context,
    private val detectedObject: List<Detection>,
    private val imageHeight: Int,
    private val imageWidth: Int
) : View(context) {

    // Use an ARGB color int; Compose's Color.Red.hashCode() is not a valid Paint color
    private val boundaryPaint = Paint().apply {
        color = Color.RED
        strokeWidth = 5f
        style = Paint.Style.STROKE
    }

    private var scaleFactor = 1f

    @SuppressLint("DrawAllocation")
    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        // Scale model-space coordinates up to view space (matches FILL_START)
        scaleFactor = max(width * 1f / imageWidth, height * 1f / imageHeight)
        for (result in detectedObject) {
            val boundingBox = result.boundingBox
            // Draw bounding box around detected objects
            val drawableRect = RectF(
                boundingBox.left * scaleFactor,
                boundingBox.top * scaleFactor,
                boundingBox.right * scaleFactor,
                boundingBox.bottom * scaleFactor
            )
            canvas.drawRect(drawableRect, boundaryPaint)
        }
    }
}
I can draw the bounding boxes correctly, but the captured image does not match what is shown in the preview view (the captured image is shifted to the left compared to the preview). So I decided to set the scale type to FILL_CENTER, but unlike with FILL_START, the bounding boxes are no longer drawn correctly. The code above is my implementation. My reference for the bounding-box drawing is https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/android/app/src/main/java/org/tensorflow/lite/examples/objectdetection/OverlayView.kt.
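My understanding (possibly wrong) is that FILL_CENTER scales the frame uniformly to fill the view and then centers it, cropping the overflow equally on both sides, so the overlay would need a center offset in addition to the scale factor. Here is a minimal sketch of that mapping as plain math; fillCenterMapping is a hypothetical helper, not part of my current code:

```kotlin
import kotlin.math.max

// Hypothetical helper: mapping from model-space coordinates to view space
// when the PreviewView scale type is FILL_CENTER.
data class Mapping(val scale: Float, val offsetX: Float, val offsetY: Float)

fun fillCenterMapping(
    viewWidth: Int,
    viewHeight: Int,
    imageWidth: Int,
    imageHeight: Int
): Mapping {
    // Same uniform fill scale as FILL_START
    val scale = max(viewWidth.toFloat() / imageWidth, viewHeight.toFloat() / imageHeight)
    // The scaled frame overflows the view on one axis; FILL_CENTER crops
    // half of the overflow on each side, so that half is the offset.
    val offsetX = (imageWidth * scale - viewWidth) / 2f
    val offsetY = (imageHeight * scale - viewHeight) / 2f
    return Mapping(scale, offsetX, offsetY)
}

fun main() {
    // Example: a 640x480 frame shown in a 1080x1920 portrait view
    val m = fillCenterMapping(1080, 1920, 640, 480)
    println(m)
}
```

If that reasoning is right, onDraw would compute e.g. boundingBox.left * scale - offsetX instead of boundingBox.left * scaleFactor. Is that the correct way to align the overlay under FILL_CENTER?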