I just want to record audio and write it out as an MP3.
I don't know what I need to do to make this code work. I'm recording audio in the renderer process with the Web Audio API, and that part works as expected.
app.tsx:
let mediaRecorder: MediaRecorder | null = null
let recordedChunks: Blob[] = []

window.api.startVoiceRecording(() => {
  navigator.mediaDevices
    .getUserMedia({ audio: true })
    .then((stream) => {
      mediaRecorder = new MediaRecorder(stream)
      mediaRecorder.addEventListener('dataavailable', (event) => {
        recordedChunks.push(event.data)
      })
      mediaRecorder.onstop = async () => {
        // NOTE: the chunks are whatever MediaRecorder produced (webm/opus in
        // Chromium); the type label here does not transcode them
        const audioData = new Blob(recordedChunks, { type: 'audio/mp3' })
        // send audioData to main to encode the mp3 (Chrome doesn't support it)
        const buffer = await audioData.arrayBuffer()
        window.api.onNewMp3Blob(buffer)
        recordedChunks = []
        mediaRecorder = null
      }
      mediaRecorder.start()
    })
    .catch((err) => {
      console.error('Error accessing microphone:', err)
    })
})

window.api.stopVoiceRecording(() => {
  // only if we are currently recording
  if (mediaRecorder) {
    mediaRecorder.stop()
    console.log('stopped, these are the chunks:', recordedChunks.length)
  }
})
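One thing worth noting about the data flow above: in Chromium, MediaRecorder emits compressed webm/opus bytes, not raw samples, while lamejs expects 16-bit PCM. A minimal sketch of one way to bridge that, assuming the webm blob is decoded in the renderer with `AudioContext.decodeAudioData` before being sent over IPC (`floatTo16BitPCM` is a hypothetical helper, not part of the original code):

```typescript
// Convert Web Audio Float32 samples (range -1..1) to the signed
// 16-bit integers that lamejs's encodeBuffer expects.
function floatTo16BitPCM(input: Float32Array): Int16Array {
  const output = new Int16Array(input.length)
  for (let i = 0; i < input.length; i++) {
    // Clamp to [-1, 1], then scale to the Int16 range.
    const s = Math.max(-1, Math.min(1, input[i]))
    output[i] = s < 0 ? s * 0x8000 : s * 0x7fff
  }
  return output
}

// Sketch of renderer-side use (browser APIs, shown for context):
// const ctx = new AudioContext()
// const decoded = await ctx.decodeAudioData(await audioData.arrayBuffer())
// const pcm = floatTo16BitPCM(decoded.getChannelData(0)) // first channel
// window.api.onNewMp3Blob(pcm.buffer) // now raw PCM, not webm
```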
This data reaches the main process via IPC:
voice.ts:
import { ipcMain } from 'electron'
import * as lamejs from 'lamejs'
import fs from 'fs'
import os from 'os'
import path from 'path'

ipcMain.on('c-onNewMp3Blob', (event, args: ArrayBuffer) => {
  const mp3encoder = new lamejs.Mp3Encoder(1, 44100, 128) // mono, 44.1 kHz, 128 kbps
  const mp3Data: Buffer[] = []
  let mp3Tmp = mp3encoder.encodeBuffer(new Int16Array(44100)) // encode one second of silence
  mp3Data.push(Buffer.from(mp3Tmp))
  // get the end part of the mp3
  mp3Tmp = mp3encoder.flush()
  // mp3Data now contains the complete mp3 data
  mp3Data.push(Buffer.from(mp3Tmp))
  // create a single buffer from the mp3 data
  const buffer = Buffer.concat(mp3Data)
  // write the buffer to a file
  fs.writeFile(path.join(os.homedir(), 'result.mp3'), buffer, (err) => {
    if (err) console.log(err)
  })
})
Instead of encoding
new Int16Array(44100)
, which is one second of silence, I want to encode args (the ArrayBuffer passed over IPC), which holds the recorded audio.
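For reference, a minimal sketch of how the handler could consume the incoming buffer, assuming the ArrayBuffer carries raw 16-bit mono PCM at 44.1 kHz (lamejs cannot parse the compressed webm/opus bytes that MediaRecorder produces); `pcmFrames` is a hypothetical helper that splits the samples into the 1152-sample blocks typically fed to an MP3 encoder:

```typescript
const FRAME_SIZE = 1152 // samples per MPEG audio frame

// Split the PCM into encoder-sized frames; subarray() returns views,
// so no samples are copied.
function pcmFrames(samples: Int16Array, size = FRAME_SIZE): Int16Array[] {
  const frames: Int16Array[] = []
  for (let i = 0; i < samples.length; i += size) {
    frames.push(samples.subarray(i, i + size))
  }
  return frames
}

// Intended use inside the ipcMain handler (requires lamejs):
// const samples = new Int16Array(args)
// const encoder = new lamejs.Mp3Encoder(1, 44100, 128)
// const mp3Data: Buffer[] = []
// for (const frame of pcmFrames(samples)) {
//   const mp3buf = encoder.encodeBuffer(frame)
//   if (mp3buf.length > 0) mp3Data.push(Buffer.from(mp3buf))
// }
// const end = encoder.flush() // final frames
// if (end.length > 0) mp3Data.push(Buffer.from(end))
// fs.writeFileSync(path.join(os.homedir(), 'result.mp3'), Buffer.concat(mp3Data))
```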
Here is a minimal example: https://github.com/vothvovo/recordtomp3