Green screen with multi-source cameras, libargus, DeepStream 6.1

I am refactoring a GStreamer element that uses NVIDIA's Argus library to receive two CSI cameras. The refactor is a migration from the old nvbuf_utils to NvUtils, following https://developer.nvidia.com/sites/default/files/akamai/embedded/nvbuf_utils_to_nvutils_migration_guide.pdf.
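
The piece of that migration that matters here is the transform call. For orientation only, the old and new entry points (as declared in nvbuf_utils.h and nvbufsurftransform.h) compare like this:

    /* Old nvbuf_utils API: operates directly on dmabuf fds. */
    int NvBufferTransform (int src_dmabuf_fd, int dst_dmabuf_fd,
                           NvBufferTransformParams *transform_params);

    /* NvUtils / DeepStream replacement: operates on (batched) NvBufSurface. */
    NvBufSurfTransform_Error NvBufSurfTransform (NvBufSurface *src,
                                                 NvBufSurface *dst,
                                                 NvBufSurfTransformParams *transform_params);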

The element itself consumes >= 2 CSI cameras, synchronizes the frames, and sends them to the element's src pad. With the original element, the pipelines below output, respectively, a one-frame (single-camera) video and a video with the two frames side by side.

Single pipeline:

gst-launch-1.0 -v nvarguscameras sensors='0 1' silent=false sync-threshold=16700000 ! "video/x-raw(memory:NVMM),format=(string)NV12,width=(int)1920,height=(int)1080,framerate=(fraction)30/1" ! nvvideoconvert ! nvv4l2h264enc ! h264parse !  filesink location=single.mp4 -e

Dual pipeline:

gst-launch-1.0 -v nvarguscameras master-sensor=0 sensors='0 1' sync-threshold=16700000 silent=false ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=(string)NV12,width=(int)1920,height=(int)1080,framerate=(fraction)30/1" ! nvmultistreamtiler width=3840 height=1080 rows=1 columns=2 ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvv4l2h264enc bitrate=800 ! h264parse ! qtmux ! filesink location="test.mp4" -e

In the original implementation, the API used NvBufferTransform to apply some modifications to the frames. This time, however, I only want to pass the frames through.

For reference, frameInfo is the struct that holds the width, height, and fd of a buffer received from another thread.
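
Judging from the fields the snippets below access, the struct has roughly this shape (a reconstruction for readability; the exact types and layout in the element's sources may differ):

    /* Sketch of the per-frame info struct, reconstructed from usage below. */
    typedef struct {
      gint    fd;         /* dmabuf fd of the captured Argus frame */
      guint   width;      /* frame width in pixels */
      guint   height;     /* frame height in pixels */
      guint64 frameNum;   /* capture sequence number */
      guint64 frameTime;  /* capture timestamp, used as the buffer PTS */
    } NvArgusFrameInfo;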

Original implementation:

      // copy buffer surface 
      GST_DEBUG_OBJECT (src, "consumer prepare buffer surfaces"); 
      GstMapInfo outmap = GST_MAP_INFO_INIT;
      gst_buffer_map (buffer, &outmap, GST_MAP_WRITE);
      NvBufSurface* surf = (NvBufSurface*) outmap.data;
      assert(surf->batchSize == src->sensors_num);
      
      assert(frames_buffer->len <= src->sensors_num); 
      surf->numFilled = 0; 
      for (int i = 0; i < frames_buffer->len; i++) {
        GST_DEBUG_OBJECT (src, "consumer fill buffer surface %d", i); 
        NvArgusFrameInfo *frameInfo = g_array_index(frames_buffer, NvArgusFrameInfo*, i);
        /* Blit the Argus frame (frameInfo->fd) into the surface already
         * allocated for batch slot i. */
        gint retn = NvBufferTransform (frameInfo->fd, (gint)surf->surfaceList[i].bufferDesc, &src->transform_params);
        if (retn != 0) {
          GST_ERROR_OBJECT(src, "consumer NvBufferTransform Failed");
          break;
        }
        surf->numFilled++; 
        // attach frame meta 
        NvDsFrameMeta *frame_meta = nvds_acquire_frame_meta_from_pool(batch_meta);

        frame_meta->pad_index = i;
        frame_meta->batch_id = i;
        frame_meta->frame_num = frameInfo->frameNum;
        frame_meta->buf_pts = frameInfo->frameTime;
        frame_meta->ntp_timestamp = 0;
        frame_meta->source_id = src->sensors[i];
        frame_meta->num_surfaces_per_frame = 1; 
        frame_meta->source_frame_width = frameInfo->width;
        frame_meta->source_frame_height = frameInfo->height;
        nvds_add_frame_meta_to_batch(batch_meta, frame_meta);
      }
      gst_buffer_unmap (buffer, &outmap);
      if (surf->numFilled != frames_buffer->len || 
          batch_meta->num_frames_in_batch != frames_buffer->len) {
        GST_ERROR_OBJECT(src, "consumer failed fill nvmm buffer");
        break;
      }   
    }

Updated implementation:

      // copy buffer surface 
      GST_DEBUG_OBJECT (src, "consumer prepare buffer surfaces"); 
      GstMapInfo outmap = GST_MAP_INFO_INIT;
      if (!gst_buffer_map (buffer, &outmap, GST_MAP_WRITE)){
        g_print ("Error: Failed to map gst buffer\n");
        break;
      }

      NvBufSurface* surf = (NvBufSurface*) outmap.data;

      assert(surf->batchSize == src->sensors_num);
      assert(frames_buffer->len <= src->sensors_num); 
      surf->numFilled = 0; 

      for (int i = 0; i < frames_buffer->len; i++) {
        NvArgusFrameInfo *frameInfo = g_array_index(frames_buffer, NvArgusFrameInfo*, i);
        /* Pass-through: point batch slot i directly at the Argus dmabuf fd
         * instead of copying into the pre-allocated surface. The rest of
         * surfaceList[i] (pitch, color format, plane params) still describes
         * the pool's original buffer. */
        surf->surfaceList[i].bufferDesc = frameInfo->fd;
        surf->numFilled++;

        NvDsFrameMeta *frame_meta = nvds_acquire_frame_meta_from_pool(batch_meta);

        frame_meta->pad_index = i;
        frame_meta->batch_id = i;
        frame_meta->frame_num = frameInfo->frameNum;
        frame_meta->buf_pts = frameInfo->frameTime;
        frame_meta->ntp_timestamp = 0;
        frame_meta->source_id = src->sensors[i];
        frame_meta->num_surfaces_per_frame = 1; 
        frame_meta->source_frame_width = frameInfo->width;
        frame_meta->source_frame_height = frameInfo->height;
        nvds_add_frame_meta_to_batch(batch_meta, frame_meta);
      }

      gst_buffer_unmap (buffer, &outmap);
      if (surf->numFilled != frames_buffer->len || 
          batch_meta->num_frames_in_batch != frames_buffer->len) {
        GST_ERROR_OBJECT(src, "consumer failed fill nvmm buffer");
        break;
      }   
    }

The single pipeline works fine, and I get a video of the one frame, but when I try the second pipeline I only get a green video. Does anyone have any suggestions?
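
One thing worth inspecting inside the fill loop is whether the surface behind the Argus fd actually matches the batch slot whose bufferDesc it overwrites; a pitch, layout, or color-format mismatch would plausibly render as green downstream. A debugging sketch, assuming NvBufSurfaceFromFd can resolve the Argus fd back to its NvBufSurface:

    /* Debug sketch: compare the surface parameters behind frameInfo->fd with
     * those of batch slot i that the pass-through overwrites. */
    NvBufSurface *src_surf = NULL;
    if (NvBufSurfaceFromFd (frameInfo->fd, (void **) &src_surf) == 0) {
      NvBufSurfaceParams *s = &src_surf->surfaceList[0];
      NvBufSurfaceParams *d = &surf->surfaceList[i];
      GST_DEBUG_OBJECT (src,
          "slot %d: src fmt=%d pitch=%u layout=%d | dst fmt=%d pitch=%u layout=%d",
          i, s->colorFormat, s->planeParams.pitch[0], s->layout,
          d->colorFormat, d->planeParams.pitch[0], d->layout);
    }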

I tried using NvBufSurfTransform instead of just passing the fd into the NvBufSurface, but that produces the same result.
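
For reference, that attempt followed the call pattern of the original NvBufferTransform code, along these lines (a minimal sketch for the body of the fill loop; it assumes NvBufSurfaceFromFd resolves the Argus fd and uses single-surface views with nearest-neighbour filtering):

    /* Sketch: blit the Argus frame into batch slot i with NvBufSurfTransform,
     * the NvUtils counterpart of NvBufferTransform(src_fd, dst_fd, params). */
    NvBufSurface *src_surf = NULL;
    if (NvBufSurfaceFromFd (frameInfo->fd, (void **) &src_surf) != 0) {
      GST_ERROR_OBJECT (src, "consumer NvBufSurfaceFromFd failed");
      break;
    }

    /* Single-surface views: the source frame and slot i of the destination. */
    NvBufSurface src_view = *src_surf;
    src_view.batchSize = src_view.numFilled = 1;

    NvBufSurface dst_view = *surf;
    dst_view.surfaceList = &surf->surfaceList[i];
    dst_view.batchSize = dst_view.numFilled = 1;

    NvBufSurfTransformParams params = {
      .transform_flag = NVBUFSURF_TRANSFORM_FILTER,
      .transform_filter = NvBufSurfTransformInter_Nearest,
    };
    if (NvBufSurfTransform (&src_view, &dst_view, &params) !=
        NvBufSurfTransformError_Success) {
      GST_ERROR_OBJECT (src, "consumer NvBufSurfTransform failed");
      break;
    }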

Both frames are available in my updated code: I can hard-code the index in the for loop to choose which of them appears in the single-video pipeline.
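
That check amounts to forcing the array index inside the loop, e.g. roughly:

    /* Hard-coded check: always publish the second camera's frame into slot 0,
     * confirming the frame itself is valid. */
    NvArgusFrameInfo *frameInfo =
        g_array_index (frames_buffer, NvArgusFrameInfo *, 1 /* forced index */);
    surf->surfaceList[0].bufferDesc = frameInfo->fd;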

c++ camera gstreamer nvidia deepstream
1 Answer

Jonny, how did you solve this in the end? I ran into the same problem and have been struggling to get it to work.
