How to get a media stream of the speaker output, to send it over the network to Microsoft Cognitive Services for real-time speech-to-text


The difficulty seems to be in accessing the speakers, not in the actual JS Speech SDK code. If I could somehow get the speaker output into a MediaStream, then I could use AudioConfig.fromStreamInput(myMediaStream); to set the input for audio transcription.
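For reference, this is the wiring I'm aiming for, as a minimal sketch (assuming the microsoft-cognitiveservices-speech-sdk browser package and placeholder credentials; myMediaStream is the speaker stream I'm trying to obtain):

    import {
      AudioConfig,
      SpeechConfig,
      SpeechRecognizer,
    } from 'microsoft-cognitiveservices-speech-sdk';

    const speechConfig = SpeechConfig.fromSubscription('<key>', '<region>');
    // fromStreamInput() accepts a MediaStream when running in a browser
    const audioConfig = AudioConfig.fromStreamInput(myMediaStream);
    const recognizer = new SpeechRecognizer(speechConfig, audioConfig);
    recognizer.recognized = (_sender, event) => console.log(event.result.text);
    recognizer.startContinuousRecognitionAsync();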

I found something related: How to get the media stream of the speaker output to stream it over the network or record it?

The accepted solution there returns an error at speaker.addTrack(stream.getAudioTracks()[0].clone()); along the lines of "Cannot read properties of undefined (reading 'clone')".

Here is my implementation:

 const getAudioStream = async () => {
    const speaker = new MediaStream();
    // The legacy API lives directly on navigator; modern browsers expose it
    // on navigator.mediaDevices instead.
    const getDisplayMedia = navigator.getDisplayMedia
        ? navigator.getDisplayMedia.bind(navigator)
        : navigator.mediaDevices.getDisplayMedia.bind(navigator.mediaDevices);
    try {
        // Await the capture so the track is added before we return
        const stream = await getDisplayMedia({
            video: true,
            audio: true
        });
        // This is the line that throws "Cannot read properties of undefined
        // (reading 'clone')" when no audio track was captured
        speaker.addTrack(stream.getAudioTracks()[0].clone());
        // Stopping and removing the video track to enhance the performance
        stream.getVideoTracks()[0].stop();
        stream.removeTrack(stream.getVideoTracks()[0]);
    } catch (error) {
        console.log(error);
    }
    return speaker;
}

useEffect(() => {
    const startStreaming = async () => {
        const speaker = await getAudioStream();
        const audioConfig = AudioConfig.fromStreamInput(speaker);
        speechrecognizer.audioConfig = audioConfig;
    };
    startStreaming();
}, []);

In a past project I implemented a hook that records the streams coming from the speakers and the microphone:

import { useEffect, useRef, useState } from 'react';

export const useAudioRecorder = () => {
  const [isRecording, setIsRecording] = useState(false);
  const [audioBlob, setAudioBlob] = useState(null);
  const mediaRecorder = useRef(null);
  const audioStream = useRef(null);

  const startRecording = async () => {
    try {
      const microphoneStream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const speakerStream = await navigator.mediaDevices.getUserMedia({ audio: { echoCancellation: false } });

      const audioContext = new AudioContext();
      const microphoneSource = audioContext.createMediaStreamSource(microphoneStream);
      const speakerSource = audioContext.createMediaStreamSource(speakerStream);
      const mixedOutput = audioContext.createMediaStreamDestination();

      microphoneSource.connect(mixedOutput);
      speakerSource.connect(mixedOutput);

      audioStream.current = mixedOutput.stream;
      mediaRecorder.current = new MediaRecorder(audioStream.current);

      mediaRecorder.current.ondataavailable = handleDataAvailable;
      mediaRecorder.current.start();

      setIsRecording(true);
    } catch (error) {
      console.error('Error starting recording:', error);
    }
  };

  const stopRecording = () => {
    if (mediaRecorder.current && isRecording) {
      mediaRecorder.current.stop();
      setIsRecording(false);
    }
  };

  const handleDataAvailable = (event) => {
    const audioBlob = new Blob([event.data], { type: 'audio/wav' });
    setAudioBlob(audioBlob);
  };

  // Clean up the streams when the component unmounts
  useEffect(() => () => {
    if (audioStream.current) {
      audioStream.current.getTracks().forEach((track) => track.stop());
    }
  }, []);

  return { isRecording, audioBlob, startRecording, stopRecording };
};

I tried a similar approach in my getAudioStream function, but it only picks up the microphone, not the speakers.

javascript reactjs azure-cognitive-services
1 Answer

There is no dedicated Web API for capturing system audio.

Firefox exposes monitor devices to getUserMedia(); Chromium-based browsers do not.
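As an illustration, a minimal sketch of picking up such a monitor device in Firefox (the /monitor/i label match is an assumption based on how PulseAudio names monitor sources; labels are only populated after a permission grant):

    // Prime permissions so enumerateDevices() exposes device labels
    await navigator.mediaDevices.getUserMedia({ audio: true });
    const devices = await navigator.mediaDevices.enumerateDevices();
    // PulseAudio monitor sources are typically labelled "Monitor of ..."
    const monitor = devices.find(
        (d) => d.kind === 'audioinput' && /monitor/i.test(d.label)
    );
    const monitorStream = await navigator.mediaDevices.getUserMedia({
        audio: { deviceId: { exact: monitor.deviceId } }
    });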

Five years later, Chromium-based browsers can now capture system audio with getDisplayMedia() via the command-line switch --enable-features=PulseaudioLoopbackForScreenShare; see [Linux] System loopback audio capture. For further references on capturing system audio in Chromium-based browsers, see captureSystemAudio References.

I found that the volume of the captured device was reduced to 8%. To avoid that, use

--disable-features=WebRtcAllowInputVolumeAdjustment
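For example, launching with both switches (the chromium binary name is an assumption; it varies by distribution):

    chromium --enable-features=PulseaudioLoopbackForScreenShare \
             --disable-features=WebRtcAllowInputVolumeAdjustment

With the browser launched that way, the capture itself looks like this: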

var stream = await navigator.mediaDevices.getDisplayMedia({
  // We're not going to be using the video track
  video: {
    width: 0,
    height: 0,
    frameRate: 0,
    displaySurface: "monitor",
  },
  audio: {
    suppressLocalAudioPlayback: false,
    // Speech synthesis audio output is generally 1 channel
    channelCount: 2,
    noiseSuppression: false,
    autoGainControl: false,
    echoCancellation: false,
  },
  systemAudio: "include",
  // Doesn't work for Tab capture
  // preferCurrentTab: true
});

var [audioTrack] = stream.getAudioTracks();

stream.getVideoTracks()[0].stop();

To rely on your own code instead, you can remap an output device to an input device. Here I've remapped the default monitor (the "What-U-Hear" of speakers and headphones) to the default input device:

pactl load-module module-remap-source \
  master=@DEFAULT_MONITOR@ \
  source_name=speakers source_properties=device.description=Speakers \
&& pactl set-default-source speakers
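To check that the new source exists, and to undo the remap later, something like the following should work (unloading by module name removes every instance of that module):

    pactl list short sources
    pactl unload-module module-remap-source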

After that, the default input device is the speakers' monitor, so I can simply use getUserMedia():

var stream = await navigator.mediaDevices.getUserMedia({
  audio: {
    channelCount: 2,
    sampleRate: 44100,
    noiseSuppression: false,
    autoGainControl: false,
    echoCancellation: false,
  }
});
var [audioTrack] = stream.getAudioTracks();

You'll have to figure out how to remap an output device to an input device on your operating system. Some work on this has been done in Screenshare-with-audio-on-Discord-with-Linux.

You can take the approach further and create a sink input that captures only a specific device, for example the output of speech-dispatcher; see Chromium doesn't support capturing monitor devices by default #17:

pactl load-module module-combine-sink \
  sink_name=Web_Speech_Sink slaves=$(pacmd list-sinks | grep -A1 "* index" | grep -oP "<\K[^ >]+") \
  sink_properties=device.description="Web_Speech_Stream" \
  format=s16le \
  channels=1 \
  rate=22050
pactl load-module module-remap-source \
  master=Web_Speech_Sink.monitor \
  source_name=Web_Speech_Monitor \
  source_properties=device.description=Web_Speech_Output
pactl move-sink-input $(pacmd list-sink-inputs | tac | perl -E'undef$/;$_=<>;/speech-dispatcher-espeak-ng.*?index: (\d+)\n/s;say $1') Web_Speech_Sink

Then do something like:

// The original fragment assumed an existing `stream` and audio `track` from a
// prior getUserMedia() call; the wrapper below is a hypothetical completion.
const getWebSpeechOutputStream = async () => {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const [track] = stream.getAudioTracks();
  const devices = await navigator.mediaDevices.enumerateDevices();
  const device = devices.find(({ label }) => label === 'Web_Speech_Output');
  if (track.getSettings().deviceId === device.deviceId) {
    return stream;
  }
  // Wrong device; switch to the remapped monitor source explicitly
  track.stop();
  return navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: device.deviceId } }
  });
};
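Whichever route you take, the resulting MediaStream can then be passed to AudioConfig.fromStreamInput() exactly as described in the question.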