GStreamer image overlay does not update on a per-second basis


I have this pipeline in Python that I use to overlay an image on an mp4 video, but the output video is identical to the input. When I run with

GST_DEBUG=3

I get:

0:00:00.046586337 11711 0x279d380 WARN gdkpixbufoverlay gstgdkpixbufoverlay.c:562:gst_gdk_pixbuf_overlay_start:<gdkpixbufoverlay0> no image location set, doing nothing
0:00:00.047215851 11711 0x2766360 FIXME videodecoder gstvideodecoder.c:1193:gst_video_decoder_drain_out:<pngdec0> Sub-class should implement drain()
0:00:00.047218585 11711 0x279d380 WARN basesrc gstbasesrc.c:3688:gst_base_src_start_complete:<filesrc0> pad not activated yet
0:00:00.055638677 11711 0x263df00 WARN qtdemux qtdemux_types.c:249:qtdemux_type_get: unknown QuickTime node type sgpd
0:00:00.055691634 11711 0x263df00 WARN qtdemux qtdemux_types.c:249:qtdemux_type_get: unknown QuickTime node type sbgp
0:00:00.055736798 11711 0x263df00 WARN qtdemux qtdemux.c:3121:qtdemux_parse_trex:<qtdemux0> failed to find fragment defaults for stream 1
0:00:00.055879661 11711 0x263df00 WARN qtdemux qtdemux.c:3121:qtdemux_parse_trex:<qtdemux0> failed to find fragment defaults for stream 2
0:00:00.057660594 11711 0x2766360 WARN videodecoder gstvideodecoder.c:2816:gst_video_decoder_chain:<pngdec0> Received buffer without a new-segment. Assuming timestamps start from 0.
0:00:00.058040800 11711 0x2766360 WARN video-info video-info.c:760:gst_video_info_to_caps: invalid matrix 0 for RGB format, using RGB
0:00:00.205414894 11711 0x2766400 WARN audio-resampler audio-resampler.c:274:convert_taps_gint16_c: can't find exact taps
0:00:01.263245091 11711 0x27661e0 FIXME basesink gstbasesink.c:3395:gst_base_sink_default_event:<filesink0> stream-start event without group-id. Consider implementing group-id handling in the upstream elements
0:00:01.264908606 11711 0x27661e0 FIXME aggregator gstaggregator.c:1410:gst_aggregator_aggregate_func:<mux> Subclass should call gst_aggregator_selected_samples() from its aggregate implementation.
DEBUG:root:Position: 2.7s / 53.0s
DEBUG:root:Position: 4.333333333s / 53.0s
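For context, the Position lines above come from the progress callback (on_timeout, which is not shown in the question). A minimal sketch of how such a line can be produced from nanosecond position/duration values; Gst.SECOND is hard-coded here (as GST_SECOND) so the snippet runs without GStreamer, and the helper name is mine:

```python
# Gst.SECOND is 1_000_000_000 (nanoseconds per second); hard-coded
# so this sketch does not require the gi/GStreamer bindings.
GST_SECOND = 1_000_000_000


def format_progress(position_ns: int, duration_ns: int) -> str:
    """Render a pipeline position/duration pair the way the debug log shows it."""
    return f"Position: {position_ns / GST_SECOND}s / {duration_ns / GST_SECOND}s"
```
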

Can anyone help me? I'm still quite new to GStreamer. My pipeline code and folder structure are shown below:

import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib


def start_pipeline(video_file_path: str, output_file_path: str) -> None:
    Gst.init(None)

    # GStreamer pipeline for adding image overlay to a video
    pipeline_string = (
        f"filesrc location={video_file_path} ! decodebin name=dec "
        f"dec. ! queue ! videoconvert ! x264enc ! queue ! mp4mux name=mux ! filesink location={output_file_path} "
        f'multifilesrc location=images/image_%06d.png index=1 caps="image/png,framerate=(fraction)30/1" ! pngdec ! videoconvert ! gdkpixbufoverlay ! queue ! x264enc ! queue ! mux. '
        f"dec. ! queue ! audioconvert ! audioresample ! voaacenc ! queue ! mux. "
    )
    pipeline = Gst.parse_launch(pipeline_string)

    # Set up bus to receive messages
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", on_bus_message, GLib.MainLoop.new(None, False))

    # Start the pipeline
    pipeline.set_state(Gst.State.PLAYING)

    # Run the main loop
    loop = GLib.MainLoop()
    # Add a timeout callback to check the progress every second
    GLib.timeout_add_seconds(1, on_timeout, pipeline, loop)

    loop.run()
    loop.quit()
    exit("Done")
.
├── images
│   ├── image_000000.png
│   ├── image_000001.png
│   ├── image_000002.png
│   ├── image_000003.png
│   ├── image_000004.png
│   ├── image_000005.png
│   ├── image_000006.png
│   ├── image_000007.png
│   ├── image_000008.png
│   └── image_000009.png
├── input.mp4
├── requirements.txt
└── stream.py
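Note that the %06d in multifilesrc's location=images/image_%06d.png is printf-style formatting: each index is zero-padded to six digits, which is what makes the pattern match the file names above. A quick illustration (the helper name is mine):

```python
def frame_path(index: int, pattern: str = "images/image_%06d.png") -> str:
    """Expand a printf-style multifilesrc location pattern for a given index."""
    return pattern % index
```
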
Tags: python, gstreamer, python-gstreamer
1 Answer

I solved it like this:

import logging

import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib


def update_overlay_location(pipeline, overlay):
    on_timeout(pipeline)  # log progress (defined elsewhere, not shown here)

    # Get the current position in the pipeline
    success, position = pipeline.query_position(Gst.Format.TIME)
    if not success:
        logging.info("Failed to get the position, using the default image.")
        # use the default image
        image_file_path = "images/image_000000.png"
    else:
        # Convert position from nanoseconds to seconds
        position_seconds = position // Gst.SECOND
        image_file_path = f"images/image_{position_seconds:06}.png"

    # Update the image file path
    overlay.set_property("location", image_file_path)

    return True  # Continue calling this function


def start_pipeline(video_file_path: str, output_file_path: str) -> None:
    Gst.init(None)

    pipeline_string = (
        f"filesrc location={video_file_path} ! decodebin name=dec "
        f"dec. ! queue ! videoconvert ! gdkpixbufoverlay name=overlay location=images/image_000000.png ! x264enc ! queue ! mp4mux name=mux ! filesink location={output_file_path} "
        f"dec. ! queue ! audioconvert ! audioresample ! voaacenc ! queue ! mux. "
    )
    pipeline = Gst.parse_launch(pipeline_string)

    # Get the overlay element
    overlay = pipeline.get_by_name("overlay")

    # Set up bus to receive messages
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", on_bus_message, GLib.MainLoop.new(None, False))

    # Start the pipeline
    pipeline.set_state(Gst.State.PLAYING)

    # Run the main loop
    loop = GLib.MainLoop()

    # Add a timeout callback to update the overlay location
    GLib.timeout_add(100, update_overlay_location, pipeline, overlay)

    loop.run()
    loop.quit()
    exit("Done")

The update_overlay_location function is called every 100 ms to update the image overlaid on the video frames.
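The index-from-position logic at the heart of this answer can be isolated as a plain function. The sketch below mirrors the answer's arithmetic, hard-coding Gst.SECOND (1 000 000 000 ns) so it runs without GStreamer; the helper name is illustrative:

```python
GST_SECOND = 1_000_000_000  # Gst.SECOND: nanoseconds per second


def image_path_for_position(position_ns: int) -> str:
    """Pick the overlay image whose index equals the whole second of playback."""
    position_seconds = position_ns // GST_SECOND
    return f"images/image_{position_seconds:06}.png"
```

One consequence of this design: the overlay only changes when the timeout callback happens to fire, so with a 100 ms interval the image shown for any given second can lag the pipeline position by up to that interval.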
