MediaPipe: buffer too small for pixels


I'm trying to build a live camera feed with face detection using CameraX and MediaPipe in Kotlin. Unfortunately, I get an error saying my buffer is too small for the pixels. The main function I run lives in the CameraScreen composable.

 fun setUpCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
    cameraProviderFuture.addListener({
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
        val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA

        val screenSize = Size(640, 480)
        val resolutionSelector = ResolutionSelector.Builder().setResolutionStrategy(
            ResolutionStrategy(screenSize,
            ResolutionStrategy.FALLBACK_RULE_NONE)
        ).build()

        // Build and bind camera use cases
        val preview = Preview.Builder()
            .setResolutionSelector(resolutionSelector)
            .build().also {
                // previewView is the PreviewView shown on screen
                it.setSurfaceProvider(previewView.surfaceProvider)
            }

        val imageAnalyzer = ImageAnalysis.Builder()
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
            .build()
            .also {
                it.setAnalyzer(executor, faceDetectorHelper::detectLivestreamFrame)
            }

        cameraProvider.unbindAll()
        try {
            cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, imageAnalyzer)
        } catch (exc: Exception) {
            android.util.Log.e("CameraFragment", "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(context))
}

In my FaceDetectorHelper I have this:

fun detectLivestreamFrame(imageProxy: ImageProxy) {

    if (runningMode != RunningMode.LIVE_STREAM) {
        throw IllegalArgumentException(
            "Attempting to call detectLivestreamFrame" +
                    " while not using RunningMode.LIVE_STREAM"
        )
    }

    val frameTime = SystemClock.uptimeMillis()

    // Copy out RGB bits from the frame to a bitmap buffer
    val bitmapBuffer =
        Bitmap.createBitmap(
            imageProxy.width,
            imageProxy.height,
            Bitmap.Config.ARGB_8888
        )
    imageProxy.use { bitmapBuffer.copyPixelsFromBuffer(imageProxy.planes[0].buffer) }
    imageProxy.close()
    // Rotate the frame received from the camera to be in the same direction as it'll be shown
    val matrix =
        Matrix().apply {
            postRotate(imageProxy.imageInfo.rotationDegrees.toFloat())

            // postScale is used here because we're forcing using the front camera lens
            // This can be set behind a bool if the camera is togglable.
            // Not using postScale here with the front camera causes the horizontal axis
            // to be mirrored.
            postScale(
                -1f,
                1f,
                imageProxy.width.toFloat(),
                imageProxy.height.toFloat()
            )
        }

    val rotatedBitmap =
        Bitmap.createBitmap(
            bitmapBuffer,
            0,
            0,
            bitmapBuffer.width,
            bitmapBuffer.height,
            matrix,
            true
        )

    // Convert the input Bitmap face to an MPImage face to run inference
    val mpImage = BitmapImageBuilder(rotatedBitmap).build()

    detectAsync(mpImage, frameTime)
}

The code fails when this line executes:

imageProxy.use { bitmapBuffer.copyPixelsFromBuffer(imageProxy.planes[0].buffer) }

Can anyone help me? If you need more information about the code, just say so and I'll add it to the post.

Thanks in advance.

android kotlin face-detection android-camerax mediapipe
1 Answer

The ImageProxy delivered by ImageAnalysis is in YUV format by default, so it cannot be copied directly into a bitmap. Plane 0 of a YUV_420_888 frame holds only the luma (Y) bytes: width × height = 307,200 bytes at 640×480, while an ARGB_8888 bitmap of the same size expects width × height × 4 = 1,228,800 bytes, which is exactly why copyPixelsFromBuffer reports that the buffer is too small. If you want to create a Bitmap from an ImageProxy, use the ImageProxy#toBitmap API.
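As a sketch of how the question's helper could use that API (assuming CameraX 1.2+, where ImageProxy#toBitmap handles the YUV-to-bitmap conversion; detectAsync, runningMode and BitmapImageBuilder are the same names as in the question):

```kotlin
fun detectLivestreamFrame(imageProxy: ImageProxy) {
    if (runningMode != RunningMode.LIVE_STREAM) {
        throw IllegalArgumentException(
            "Attempting to call detectLivestreamFrame while not using RunningMode.LIVE_STREAM"
        )
    }

    val frameTime = SystemClock.uptimeMillis()

    // toBitmap() performs the YUV -> ARGB_8888 conversion internally,
    // so no manual copyPixelsFromBuffer is needed.
    val bitmapBuffer = imageProxy.toBitmap()

    // Read rotation metadata before closing the proxy.
    val rotationDegrees = imageProxy.imageInfo.rotationDegrees
    imageProxy.close()

    val matrix = Matrix().apply {
        postRotate(rotationDegrees.toFloat())
        // Mirror horizontally because the front camera is used.
        postScale(-1f, 1f, bitmapBuffer.width.toFloat(), bitmapBuffer.height.toFloat())
    }

    val rotatedBitmap = Bitmap.createBitmap(
        bitmapBuffer, 0, 0, bitmapBuffer.width, bitmapBuffer.height, matrix, true
    )

    val mpImage = BitmapImageBuilder(rotatedBitmap).build()
    detectAsync(mpImage, frameTime)
}
```

Alternatively, you can ask CameraX to deliver RGBA frames directly when building the ImageAnalysis use case, so that plane 0 actually contains width × height × 4 bytes:

```kotlin
val imageAnalyzer = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
    .build()
```

Note that with RGBA output the row stride may still exceed width × 4 on some devices, so toBitmap() remains the simpler, safer option.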
