Audio playback from a WebSockets endpoint on the Web

I have a web app with a FastAPI backend and a web frontend. I'd like to be able to play audio on the client over WebSockets. The reason is that the user already does a lot of other interaction over the WebSocket, and I'd like to keep a single endpoint for tracking state.

The problem: the audio coming over the WebSocket is very choppy, whereas when I load the same logic through a streaming /audio request everything sounds fine. So I'd like to make the two equivalent.

Here is the backend:

from fastapi import APIRouter, WebSocket
from fastapi.responses import StreamingResponse

router = APIRouter()


@router.websocket('/audio_ws')
async def audio_sockets(ws: WebSocket):
    await ws.accept()

    # Stream the WAV file over the WebSocket in fixed-size chunks
    CHUNK = 1024
    with open('my_file.wav', 'rb') as file_like:
        while True:
            data = file_like.read(CHUNK)
            if data == b'':
                break
            await ws.send_bytes(data)


@router.get("/audio")
def read_audio():
    def iterfile():
        CHUNK = 1024
        with open('my_file.wav', 'rb') as file_like:
            while True:
                next = file_like.read(1024)
                if next == b'':
                    file_like.close()
                    break
                yield next
    return StreamingResponse(iterfile(), media_type="audio/wav")

As you can see, the file-reading logic is essentially the same in both.

Here is the frontend code that reads from the stream, and it works like a charm:

<audio preload="none" controls id="audio">
   <source src="/audio" type="audio/wav">
</audio>

And here is the JavaScript used to read the data from the WebSocket:

function playAudioFromBackend() {
    const sample_rate = 44100; // Hz

    // Websocket url
    const ws_url = "ws://localhost:8000/audio_ws";

    let audio_context = null;
    let ws = null;

    async function start() {
        if (ws != null) {
            return;
        }

        // Create an AudioContext that plays audio from the AudioWorkletNode
        audio_context = new AudioContext();
        await audio_context.audioWorklet.addModule('audioProcessor.js');
        const audioNode = new AudioWorkletNode(audio_context, 'audio-processor');
        audioNode.connect(audio_context.destination);

        // Setup the websocket
        ws = new WebSocket(ws_url);
        ws.binaryType = 'arraybuffer';

        // Process incoming messages
        ws.onmessage = (event) => {
            // Convert to Float32 lpcm, which is what AudioWorkletNode expects
            const int16Array = new Int16Array(event.data);
            let float32Array = new Float32Array(int16Array.length);
            for (let i = 0; i < int16Array.length; i++) {
                float32Array[i] = int16Array[i] / 32768.;
            }

            // Send the audio data to the AudioWorkletNode
            audioNode.port.postMessage({ message: 'audioData', audioData: float32Array });
        };

        ws.onopen = () => {
            console.log('WebSocket connection opened.');
        };

        ws.onclose = () => {
            console.log('WebSocket connection closed.');
        };

        ws.onerror = error => {
            console.error('WebSocket error:', error);
        };
    }

    async function stop() {
        console.log('Stopping audio');
        if (audio_context) {
            await audio_context.close();
            audio_context = null;
            ws.close();
            ws = null;
        }
    }

    start();
}

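For context, a minimal way to invoke this (sketch only; the play button id is illustrative, not part of my actual page) is from a user gesture, since browsers generally will not let an AudioContext produce sound until one has occurred:

// Hypothetical wiring: start playback from a click so the AudioContext is allowed to run
document.getElementById('play').addEventListener('click', () => {
    playAudioFromBackend();
});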
And here is the associated worklet:

class AudioProcessor extends AudioWorkletProcessor {

    constructor() {
        super();
        this.buffer = new Float32Array();

        // Receive audio data from the main thread, and add it to the buffer
        this.port.onmessage = (event) => {
            let newFetchedData = new Float32Array(this.buffer.length + event.data.audioData.length);
            newFetchedData.set(this.buffer, 0);
            newFetchedData.set(event.data.audioData, this.buffer.length);
            this.buffer = newFetchedData;
        };
    }

    // Take a chunk from the buffer and send it to the output to be played
    process(inputs, outputs, parameters) {
        const output = outputs[0];
        const channel = output[0];
        const bufferLength = this.buffer.length;
        for (let i = 0; i < channel.length; i++) {
            channel[i] = (i < bufferLength) ? this.buffer[i] : 0;
        }
        this.buffer = this.buffer.slice(channel.length);
        return true;
    }
}

registerProcessor('audio-processor', AudioProcessor);

What am I doing wrong? Why is the audio choppy?

javascript audio websocket audiocontext
1 Answer

Your code doesn't seem to use the sample_rate variable. Your audio is probably 44.1 kHz, but the AudioContext is running at 48 kHz. If you don't account for that, there won't be enough samples to fill the audio output buffer.

The easiest way to avoid this is to force the AudioContext to run at the desired sample rate. It will then resample internally.

audio_context = new AudioContext({
    sampleRate: sample_rate
});
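If you want to confirm that this is what's happening, a quick check (just a sketch; the 44100 value is taken from the sample_rate constant in your question) is to log the rate a default context actually picks versus the forced one:

// Sanity check: compare the default context rate with the forced one
const defaultCtx = new AudioContext();
console.log('default rate:', defaultCtx.sampleRate); // usually the hardware rate, often 48000
defaultCtx.close();

const forcedCtx = new AudioContext({ sampleRate: 44100 });
console.log('forced rate:', forcedCtx.sampleRate);   // 44100; the browser resamples for the output device
forcedCtx.close();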

It's also possible that the WebSocket connection isn't sending the audio at the required rate.
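One way to rule that out is to measure how many bytes actually arrive per second on the client. The sketch below is only illustrative (the trackThroughput helper is mine, and it assumes 16-bit mono PCM at 44.1 kHz, i.e. roughly 88200 payload bytes per second):

// Rough throughput check for the incoming audio data
let bytesReceived = 0;
let windowStart = performance.now();

function trackThroughput(arrayBuffer) {
    bytesReceived += arrayBuffer.byteLength;
    const elapsedMs = performance.now() - windowStart;
    if (elapsedMs >= 1000) {
        const bytesPerSecond = bytesReceived * 1000 / elapsedMs;
        console.log('incoming audio:', Math.round(bytesPerSecond), 'bytes/s (need ~88200 for 16-bit mono at 44.1 kHz)');
        bytesReceived = 0;
        windowStart = performance.now();
    }
}

// Call it at the top of the existing handler:
// ws.onmessage = (event) => { trackThroughput(event.data); ... };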
