Simultaneous video recording and frame processing with CameraX on Android


I am building an app whose goal is to detect faces, detect eye blinks, and record video, all at the same time. So I need to record video while simultaneously processing frames for face and blink detection. I am using CameraX for this. There are some Android limitations in this scenario:

  1. We cannot record video and use CameraX's image-analysis use case at the same time.
  2. .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL) slows down frame processing (the ML Kit documentation mentions this as well), and since I need very fast frame processing, I cannot use it either.
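On the second point: since CLASSIFICATION_MODE_ALL (which provides the eye-open probabilities) is too slow here, one possible alternative is to configure the detector for speed and infer blinks from the eye contours instead. The following is a sketch using ML Kit's FaceDetectorOptions, not the poster's code, and whether it is actually faster than classification would have to be measured on the target device:

```java
// Sketch: fast mode, no classification; blink detection would have to be
// derived from the eye contour points (e.g. an eye-aspect-ratio heuristic).
FaceDetectorOptions options = new FaceDetectorOptions.Builder()
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
        .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_NONE)
        .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_NONE)
        .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
        .build();
FaceDetector detector = FaceDetection.getClient(options);
```

Note that contour detection reports only the most prominent face and adds its own cost, so this is a trade-off to benchmark, not a guaranteed win.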

The dependencies I am using:

// CameraX dependencies
def camerax_version = "1.2.0"
implementation "androidx.camera:camera-camera2:${camerax_version}"
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
implementation "androidx.camera:camera-view:${camerax_version}"
implementation "androidx.camera:camera-video:${camerax_version}"
// Google ML Kit face detection (unbundled)
implementation 'com.google.android.gms:play-services-mlkit-face-detection:17.1.0'

The PreviewView in the XML layout file:

<androidx.camera.view.PreviewView
    android:id="@+id/previewView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

Setting up the camera:

@Nullable
protected ProcessCameraProvider cameraProvider;
@Nullable
private Preview previewUseCase;
private VideoCapture<Recorder> videoCapture;
private CameraSelector cameraSelector;

private void initCamera() {
    try {
        cameraSelector = new CameraSelector.Builder()
                .requireLensFacing(CameraSelector.LENS_FACING_FRONT)
                .build();
        new ViewModelProvider(this, (ViewModelProvider.Factory)
                ViewModelProvider.AndroidViewModelFactory.getInstance(getApplication()))
                .get(CameraXViewModel.class)
                .getProcessCameraProvider()
                .observe(getViewLifecycleOwner(),
                        provider -> {
                            cameraProvider = provider;
                            bindAllCameraUseCases();
                        });
    } catch (Exception e) {
        e.printStackTrace();
    }
}

/**
 * Initializes use-case binding: checks the camera provider,
 * unbinds any previous use cases, then binds the current ones.
 */
private void bindAllCameraUseCases() {
    try {
        if (cameraProvider != null) {
            // unbindAll() already detaches every use case, including the preview
            cameraProvider.unbindAll();
            bindCameraUseCases();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

/**
 * Binds the camera use cases: obtains the camera provider instance
 * and binds the preview and video-capture use cases to the lifecycle.
 */
private void bindCameraUseCases() {
    try {
        previewUseCase = new Preview.Builder().build();
        previewUseCase.setSurfaceProvider(fragmentCameraBinding.previewView.getSurfaceProvider());
        Recorder recorder = new Recorder.Builder()
                .setQualitySelector(QualitySelector.from(Quality.LOWEST,
                        FallbackStrategy.higherQualityOrLowerThan(Quality.LOWEST)))
                .build();
        videoCapture = VideoCapture.withOutput(recorder);
        new ViewModelProvider(this, (ViewModelProvider.Factory)
                ViewModelProvider.AndroidViewModelFactory.getInstance(getActivity().getApplication()))
                .get(CameraXViewModel.class)
                .getProcessCameraProvider()
                .observe(getViewLifecycleOwner(),
                        provider -> {
                            cameraProvider = provider;
                            try {
                                cameraProvider.unbindAll();
                                cameraProvider.bindToLifecycle(getViewLifecycleOwner(),
                                        cameraSelector, previewUseCase, videoCapture);
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        });
    } catch (Exception e) {
        e.printStackTrace();
    }
}

As I mentioned, I cannot use the ImageAnalysis use case to get frames, so I grab them manually like this:

Bitmap bitmap = fragmentCameraBinding.previewView.getBitmap();

I then use this bitmap for face detection and blink detection. At the same time I am also recording video; the code for that is:

@Nullable
protected String storagePath = "";
private VideoCapture<Recorder> videoCapture;
private Recording currentRecording;
private File videoFile;
private final Consumer<VideoRecordEvent> videoCallback = new Consumer<VideoRecordEvent>() {
    @Override
    public void accept(VideoRecordEvent videoRecordEvent) {
        if (videoRecordEvent instanceof VideoRecordEvent.Start) {
            // video recording started
        } else if (videoRecordEvent instanceof VideoRecordEvent.Finalize) {
            // video recording stopped
        }
    }
};
private void startRecording() {
    try {
        File directory = requireContext().getCacheDir();
        videoFile = null;
        try {
            videoFile = File.createTempFile("recorded_file", ".mp4", directory);
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (videoFile != null) {
            storagePath = videoFile.getPath();
            FileOutputOptions fileOutputOptions = new FileOutputOptions.Builder(videoFile).build();
            currentRecording = videoCapture.getOutput()
                    .prepareRecording(requireActivity(), fileOutputOptions)
                    .start(getExecutor(), videoCallback);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private Executor getExecutor() {
    return ContextCompat.getMainExecutor(requireContext());
}

Stopping the video recording:

currentRecording.stop();

Because previewView.getBitmap is tied to the UI, I have to call it on the main UI thread. That takes time, and the face detection itself takes time as well. Overall my code processes 3-4 frames per second, but I would like to process 16-18, since the device delivers almost 24-25 frames per second.
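To make numbers like these easy to reproduce, a small throughput meter helps. Below is a plain-Java sliding-window FPS meter; FpsMeter is a hypothetical helper, not part of CameraX, with timestamps passed in explicitly (e.g. from SystemClock.elapsedRealtime()) so it is easy to test:

```java
// Estimates frames-per-second over ~1-second windows from caller-supplied timestamps.
final class FpsMeter {
    private long windowStartMs;
    private int frames;
    private double lastFps;

    FpsMeter(long nowMs) {
        this.windowStartMs = nowMs;
    }

    // Record one processed frame; returns the most recent completed-window estimate.
    double onFrame(long nowMs) {
        frames++;
        long elapsedMs = nowMs - windowStartMs;
        if (elapsedMs >= 1000) {
            lastFps = frames * 1000.0 / elapsedMs;
            frames = 0;
            windowStartMs = nowMs;
        }
        return lastFps;
    }
}
```

Calling onFrame once per processed frame and logging the returned value makes it obvious whether a change to the pipeline actually moved the needle.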

android android-camera android-camerax google-mlkit android-video-record
1 Answer

CameraX supports concurrent video capture and image analysis on devices whose camera hardware level is LEVEL_3. On lower-level devices, I believe concurrent VideoCapture and ImageAnalysis is also supported if you use the latest CameraX release (1.3.0-rc02). However, it comes at a cost: CameraX internally uses OpenGL to copy frames to both the Preview and the VideoCapture surfaces, so you may see degraded performance and system health.
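If the device (or a newer CameraX release) does allow VideoCapture and ImageAnalysis together, the binding could look roughly like the sketch below. It is not guaranteed to bind successfully on every device, and analyzeFrame is a hypothetical stand-in for the face/blink detection:

```java
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
        // Drop stale frames instead of queueing them when analysis falls behind.
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build();
imageAnalysis.setAnalyzer(executor, imageProxy -> {
    analyzeFrame(imageProxy); // hypothetical: run ML Kit detection on this frame
    imageProxy.close();       // must be closed, or the analyzer stops receiving frames
});
cameraProvider.unbindAll();
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector,
        previewUseCase, videoCapture, imageAnalysis);
```

STRATEGY_KEEP_ONLY_LATEST matters here: with slow per-frame detection, queueing frames would only add latency without increasing throughput.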

As you mentioned, PreviewView#getBitmap is inefficient; we do not recommend it.

ML Kit's face detection being slow, on the other hand, is a harder problem to solve. If it is too slow, consider skipping frames.
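A simple way to skip frames is to drop any frame that arrives while the previous one is still being analyzed. A minimal plain-Java sketch; FrameGate is a hypothetical helper, not part of CameraX or ML Kit:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Drops frames that arrive while a previous frame is still being processed.
final class FrameGate {
    private final AtomicBoolean busy = new AtomicBoolean(false);

    // Returns true if the caller may process this frame; false means "skip it".
    boolean tryAcquire() {
        return busy.compareAndSet(false, true);
    }

    // Call from the detector's completion listener to let the next frame through.
    void release() {
        busy.set(false);
    }
}
```

Call tryAcquire() before handing a bitmap to the face detector and release() in the detector's success/failure listener; frames arriving in between are dropped rather than queued, which keeps latency bounded even when detection is slow.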
