Trying to stream a Bitmap over WebRTC. My Capturer class looks roughly like this:
public class BitmapCapturer implements VideoCapturer, VideoSink {

    private Capturer capturer;
    private int width;
    private int height;
    private SurfaceTextureHelper textureHelper;
    private Context appContext;
    @Nullable
    private CapturerObserver capturerObserver;

    @Override
    public void initialize(SurfaceTextureHelper surfaceTextureHelper,
                           Context context, CapturerObserver capturerObserver) {
        if (capturerObserver == null) {
            throw new RuntimeException("capturerObserver not set.");
        } else {
            this.appContext = context;
            this.textureHelper = surfaceTextureHelper;
            this.capturerObserver = capturerObserver;
            this.capturer = new Capturer();
            this.textureHelper.startListening(this);
        }
    }

    @Override
    public void startCapture(int width, int height, int fps) {
        this.width = width;
        this.height = height;
        long start = System.nanoTime();
        textureHelper.setTextureSize(width, height);
        int[] textures = new int[1];
        GLES20.glGenTextures(1, textures, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
        Matrix matrix = new Matrix();
        matrix.preTranslate(0.5f, 0.5f);
        matrix.preScale(1f, -1f);
        matrix.preTranslate(-0.5f, -0.5f);
        YuvConverter yuvConverter = new YuvConverter();
        TextureBufferImpl buffer = new TextureBufferImpl(width, height,
                VideoFrame.TextureBuffer.Type.RGB, textures[0], matrix,
                textureHelper.getHandler(), yuvConverter, null);
        this.capturerObserver.onCapturerStarted(true);
        this.capturer.startCapture(new ScreenConfig(width, height),
                new CapturerCallback() {
                    @Override
                    public void onFrame(Bitmap bitmap) {
                        textureHelper.getHandler().post(() -> {
                            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
                            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
                            GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
                            long frameTime = System.nanoTime() - start;
                            VideoFrame videoFrame = new VideoFrame(buffer.toI420(), 0, frameTime);
                            capturerObserver.onFrameCaptured(videoFrame);
                            videoFrame.release();
                        });
                    }
                });
    }

    @Override
    public void onFrame(VideoFrame videoFrame) {
        capturerObserver.onFrameCaptured(videoFrame);
    }

    @Override
    public void stopCapture() throws InterruptedException {
    }

    @Override
    public void changeCaptureFormat(int i, int i1, int i2) {
    }

    @Override
    public void dispose() {
    }

    @Override
    public boolean isScreencast() {
        return true;
    }
}
Here is what the resulting stream looks like:
Below I'll try to describe my experiments so far.
If the frame is rotated by 90 degrees, the stream looks normal:
VideoFrame videoFrame = new VideoFrame(buffer.toI420(), 90, frameTime);
I tried swapping the TextureBuffer dimensions:
TextureBufferImpl buffer = new TextureBufferImpl(height, width,
        VideoFrame.TextureBuffer.Type.RGB, textures[0], matrix,
        textureHelper.getHandler(), yuvConverter, null);
I also tried passing the height as both the width and the height:
TextureBufferImpl buffer = new TextureBufferImpl(height, height,
        VideoFrame.TextureBuffer.Type.RGB, textures[0], matrix,
        textureHelper.getHandler(), yuvConverter, null);
I'm confused here. Even though all the dimensions are set to portrait, WebRTC somehow seems to expect a landscape frame. I tried logging all the frame buffer sizes inside the WebRTC library down to the video encoder, and they are correct. The problem doesn't seem to be related to the conversion method, because I also tried converting with ARGBToI420 from libyuv, and the result was the same.
Any help is greatly appreciated.
The image format is RGB, and the image's width is apparently not divisible by 4. Therefore GL_UNPACK_ALIGNMENT has to be set to 1 when loading the image into the texture object:
GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
By default GL_UNPACK_ALIGNMENT is 4, so each row of the image is assumed to be aligned to 4 bytes. Since the image data is tightly packed and each pixel is 3 bytes in size, the start of a row may be misaligned, depending on the image's width. Hence, if the image's width is divisible by 4, there is no need to change the alignment.
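To make the alignment arithmetic concrete, here is a small self-contained sketch (the class and method names are mine, not from any library) of the row stride GL expects for a given GL_UNPACK_ALIGNMENT, versus the stride of tightly packed RGB data. The width 1082 is just an illustrative value that is not divisible by 4:

```java
// Illustrates how GL_UNPACK_ALIGNMENT changes the row stride GL expects.
// For tightly packed RGB data (3 bytes per pixel), the upload is only
// interpreted correctly when the expected stride equals width * 3.
public class UnpackAlignmentDemo {

    // Row stride GL expects: width * bytesPerPixel, rounded up to `alignment`.
    static int expectedStride(int width, int bytesPerPixel, int alignment) {
        int packed = width * bytesPerPixel;
        return ((packed + alignment - 1) / alignment) * alignment;
    }

    public static void main(String[] args) {
        int bpp = 3; // RGB, 3 bytes per pixel
        // Width divisible by 4: packed stride already meets the default alignment.
        System.out.println(expectedStride(640, bpp, 4));  // 1920 == 640 * 3
        // Width not divisible by 4: with the default alignment of 4, GL expects
        // 2 padding bytes per row that tightly packed data does not contain.
        System.out.println(expectedStride(1082, bpp, 4)); // 3248 != 1082 * 3 (3246)
        // With GL_UNPACK_ALIGNMENT set to 1, the expectation matches the data.
        System.out.println(expectedStride(1082, bpp, 1)); // 3246 == 1082 * 3
    }
}
```

When the expected stride and the actual stride disagree, each row is read with a growing offset, which produces exactly the kind of skewed/sheared image described in the question.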