Drawing a Bitmap onto a video frame with OpenGL


I am working on a class that extends Camera2Capturer in order to grab frames from the camera, modify them, and then feed them back to the observer callback. I can get the frame, convert it to a Bitmap, modify it as desired, and then use OpenGL to draw it into a new VideoFrame, which I return with capturerObserver.onFrameCaptured(videoFrame);

The problem is that my newly created videoFrame is stretched. When I inspect the bitmap it is correct, but the drawn video frame is stretched sideways. I have tried different devices with different resolutions, and the problem is the same everywhere.

This is the code in my startCapture method:

@Override
public void startCapture(int width, int height, int fps) {
    super.startCapture(width, height, fps);
    this.width = width;
    this.height = height;

    captureThread = new Thread(() -> {

        final int[] textureHandle = new int[1];
        GLES20.glGenTextures(1, textureHandle, 0);
        Matrix matrix = new Matrix();
        matrix.postScale(1f, -1f);
        TextureBufferImpl buffer = new TextureBufferImpl(width, height, VideoFrame.TextureBuffer.Type.RGB,
                textureHandle[0], matrix, surTexture.getHandler(), yuvConverter, null);

        // Bind to the texture in OpenGL
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

        try {
            while (true) {
                surTexture.getHandler().post(() -> {
                    if (needsToRedrawFrame) {
                        VideoFrame lastFrameReceived = capturerObs.getLastFrameReceived();

                        // This is the bitmap I want to draw on the video frame
                        Bitmap bitmapToDraw = drawingCanvasView.getmBitmap();

                        // At this point, bitmapToDraw contains the drawing and the frame
                        // captured from the camera, overlaid.
                        // Now we need to convert it to fit into the onFrameCaptured
                        // callback (requires a VideoFrame).

                        // Set filtering
                        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
                        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

                        // Load the bitmap into the bound texture.
                        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmapToDraw, 0);

                        bitmapToDraw.recycle();

                        // The bitmap is drawn on the GPU at this point.

                        // Transfer it to the VideoFrame
                        VideoFrame.I420Buffer i420Buf = yuvConverter.convert(buffer);

                        VideoFrame videoFrame = new VideoFrame(i420Buf, 0, lastFrameReceived.getTimestampNs());

                        ogCapturerObserver.onFrameCaptured(videoFrame);
                        needsToRedrawFrame = false;
                    }
                });

                Thread.sleep(100);
            }
        } catch (Exception e) {
            LogHelper.logError(CapturerObserverProxy.class, "RMTEST THIS > " + e.getMessage(), e);
        }
    });
    captureThread.start();
}

This is what bitmapToDraw looks like: Bitmap

This is how the videoFrame looks when drawn on a SurfaceView: VideoFrame

What exactly am I missing? I am not familiar with OpenGL at all.

android opengl-es webrtc android-bitmap
1 Answer

It turns out the frame was being drawn correctly, but the frame's resolution was different from that of the surface it was actually drawn on, so it got stretched. I had to resize the bitmap I wanted to draw (while maintaining the aspect ratio!). If the bitmap has the same size as the Surface it is rendered on, it will not be stretched.
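The aspect-ratio math behind that resize can be sketched as follows. This is a minimal plain-Java sketch with no Android dependencies; the class and method names (AspectFit, fit) are hypothetical helpers, not part of the original capturer code:

```java
// Hypothetical helper: computes the largest size that fits inside a target
// surface while preserving the source bitmap's aspect ratio ("fit" scaling).
public class AspectFit {

    // Returns {scaledWidth, scaledHeight}: srcW x srcH scaled by the same
    // factor on both axes so it fits inside dstW x dstH without distortion.
    public static int[] fit(int srcW, int srcH, int dstW, int dstH) {
        double scale = Math.min((double) dstW / srcW, (double) dstH / srcH);
        return new int[] {
            (int) Math.round(srcW * scale),
            (int) Math.round(srcH * scale)
        };
    }

    public static void main(String[] args) {
        // Example: a 1920x1080 bitmap rendered into a 720x1280 portrait surface.
        // Scaling uniformly by min(720/1920, 1280/1080) = 0.375 gives 720x405.
        int[] r = fit(1920, 1080, 720, 1280);
        System.out.println(r[0] + "x" + r[1]); // 720x405
    }
}
```

On Android you could then produce the actual bitmap with Bitmap.createScaledBitmap(src, scaledW, scaledH, true) and, when the aspect ratios differ, draw it centered on a Canvas backed by a bitmap of the exact Surface size (letterboxing), so the texture uploaded via GLUtils.texImage2D matches the resolution declared in the TextureBufferImpl.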
