How to add a few seconds of silence to recorded audio (using the browser's MediaRecorder)


I am recording audio in JavaScript and playing the result back, like this:

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const mediaRecorderInstance = new MediaRecorder(stream);

  mediaRecorderInstance.start();

  const audioChunks: Blob[] = [];
  mediaRecorderInstance.ondataavailable = (e) => {
    audioChunks.push(e.data);
  };

  // on stop, play back the recording
  mediaRecorderInstance.onstop = () => {
    const audioBlob = new Blob(audioChunks, { type: "audio/webm" });
    const audioUrl = URL.createObjectURL(audioBlob);
    const audio = new Audio(audioUrl);
    audio.play();
  };

  setTimeout(() => mediaRecorderInstance.stop(), 4000); // record for 4 s
});

What I want to do is pad the start of the recording with "nothing", e.g. 3 seconds of silence.

I worked out how to create silent audio (I pass a duration and it generates a silent WAV file), like this:

function getSilentAudioBuffer(time: number) {
    const SAMPLE_RATE = 44100;
    const frameCount = time * SAMPLE_RATE;
    if (!window.AudioContext) {
      throw new Error("No AudioContext support");
    }
    const context = new AudioContext();
    // a mono buffer whose samples are all 0, i.e. silence
    const audioBuffer = context.createBuffer(1, frameCount, SAMPLE_RATE);

    const numOfChan = audioBuffer.numberOfChannels;
    const length = frameCount * numOfChan * 2 + 44; // 16-bit samples + 44-byte header
    const buffer = new ArrayBuffer(length);
    const view = new DataView(buffer);
    const channels: Float32Array[] = [];
    let sample: number;
    let offset = 0;
    let pos = 0;

    // write WAVE header
    setUint32(0x46464952); // "RIFF"
    setUint32(length - 8); // file length - 8
    setUint32(0x45564157); // "WAVE"

    setUint32(0x20746d66); // "fmt " chunk
    setUint32(16); // chunk length
    setUint16(1); // PCM (uncompressed)
    setUint16(numOfChan);
    setUint32(audioBuffer.sampleRate);
    setUint32(audioBuffer.sampleRate * 2 * numOfChan); // avg. bytes/sec
    setUint16(numOfChan * 2); // block align
    setUint16(16); // bits per sample

    setUint32(0x61746164); // "data" chunk
    setUint32(length - pos - 4); // chunk length

    // write interleaved data
    for (let i = 0; i < numOfChan; i++) channels.push(audioBuffer.getChannelData(i));

    while (pos < length) {
      for (let i = 0; i < numOfChan; i++) {
        // interleave channels
        sample = Math.max(-1, Math.min(1, channels[i][offset])); // clamp
        sample = (sample < 0 ? sample * 0x8000 : sample * 0x7fff) | 0; // scale to 16-bit signed int
        view.setInt16(pos, sample, true); // write 16-bit sample
        pos += 2;
      }
      offset++; // next source sample
    }

    // create Blob
    return new Blob([buffer], { type: "audio/wav" });

    function setUint16(data: number) {
      view.setUint16(pos, data, true);
      pos += 2;
    }

    function setUint32(data: number) {
      view.setUint32(pos, data, true);
      pos += 4;
    }
  }
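For reference, the magic numbers in the header are the ASCII chunk IDs ("RIFF", "WAVE", "fmt ", "data") written as little-endian 32-bit integers. A stripped-down, browser-free sketch of the same header writing (`writeWavHeader` is an illustrative helper, not part of the code above) can be checked with a plain `DataView`:

```typescript
// Illustrative helper: writes a 44-byte PCM WAV header into a DataView,
// mirroring the fields written by getSilentAudioBuffer above.
function writeWavHeader(view: DataView, numOfChan: number, sampleRate: number, dataLength: number): void {
  let pos = 0;
  const u16 = (v: number) => { view.setUint16(pos, v, true); pos += 2; };
  const u32 = (v: number) => { view.setUint32(pos, v, true); pos += 4; };
  u32(0x46464952);                 // "RIFF"
  u32(36 + dataLength);            // total file size - 8
  u32(0x45564157);                 // "WAVE"
  u32(0x20746d66);                 // "fmt "
  u32(16);                         // fmt chunk length
  u16(1);                          // PCM (uncompressed)
  u16(numOfChan);
  u32(sampleRate);
  u32(sampleRate * 2 * numOfChan); // byte rate
  u16(numOfChan * 2);              // block align
  u16(16);                         // bits per sample
  u32(0x61746164);                 // "data"
  u32(dataLength);                 // data chunk length
}

const header = new DataView(new ArrayBuffer(44));
writeWavHeader(header, 1, 44100, 44100 * 2); // 1 s of mono 16-bit audio
const tag = String.fromCharCode(header.getUint8(0), header.getUint8(1), header.getUint8(2), header.getUint8(3));
console.log(tag); // "RIFF"
```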

The problem is that if I concatenate the two audios (the recorded one and the generated silent one), only whichever one is passed first is actually used:

const audioBlob = new Blob([silent, ...audioChunks], { type: "audio/*" });

This plays only the silent audio; passing audioChunks first instead plays only the recorded audio, without the silence.
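This happens because each Blob is a complete container file (WebM and WAV) with its own header, and the decoder stops at the end of the first container it parses; byte-concatenating the files does not produce one longer recording. The silence has to be joined at the sample level, e.g. after decoding the recording with `AudioContext.decodeAudioData`. A minimal sketch of that joining step on raw mono samples (`prependSilence` is a hypothetical helper, not from the code above):

```typescript
// Hypothetical helper: prepend `seconds` of silence to decoded PCM samples.
// In the browser, `samples` could come from
// (await ctx.decodeAudioData(await audioBlob.arrayBuffer())).getChannelData(0),
// and the result copied into a new AudioBuffer (or re-encoded to WAV) for playback.
function prependSilence(samples: Float32Array, seconds: number, sampleRate: number): Float32Array {
  const silentFrames = Math.round(seconds * sampleRate);
  const out = new Float32Array(silentFrames + samples.length); // zero-filled by default
  out.set(samples, silentFrames); // recording starts after the silent prefix
  return out;
}

// 4 s of stand-in "recording" at 44.1 kHz, padded with 3 s of silence
const recorded = new Float32Array(4 * 44100).fill(0.25);
const padded = prependSilence(recorded, 3, 44100);
console.log(padded.length / 44100); // 7 (seconds)
```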
