Swift AVFAudio: how to get AVAudioInputNode.installTap() to call back 125 times per second

Problem description · 0 votes · 1 answer

I'm building an app that takes input from the microphone and immediately plays it back through the output. I eventually want to do voice processing on it, so I need a low-latency solution. However, my current solution only gets a callback roughly every 100 ms, which is too slow for my use case. I want to access the playback buffer every 8 ms.

My current solution is:

var audioEngine: AVAudioEngine
    var inputNode: AVAudioInputNode
    var playerNode: AVAudioPlayerNode
    var bufferDuration: AVAudioFrameCount
    
    init() {
        audioEngine = AVAudioEngine()
        inputNode = audioEngine.inputNode
        playerNode = AVAudioPlayerNode()
        bufferDuration = 960 // AVAudioFrameCount(352)
    }
    
    func startStreaming() -> Void {
        // Configure the session
        do {
            let audioSession = AVAudioSession.sharedInstance()
            try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
            try audioSession.setPreferredSampleRate(96000)
            try audioSession.setPreferredIOBufferDuration(0.008)
            try audioSession.setActive(true)
            try audioSession.overrideOutputAudioPort(.speaker)
        } catch {
            print("Audio Session error: \(error)")
        }
      
    let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: AVAudioSession.sharedInstance().sampleRate, channels: 2, interleaved: false)
        
    // Set the playerNode to immediately queue/play the recorded buffer.
    // Note: explicit self capture is needed here because playerNode is a property.
    inputNode.installTap(onBus: 0, bufferSize: bufferDuration, format: fmt) { [self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        // Schedule the buffer for playback
        playerNode.scheduleBuffer(buffer, at: nil, options: [], completionHandler: nil)
    }
        
    // Start the engine
    do {
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: audioEngine.outputNode, format: fmt/*inputNode.inputFormat(forBus: 0)*/)
            
        try audioEngine.start()
        playerNode.play()
    } catch {
        print("Audio Engine start error: \(error)")
    }
}
I've tried setting options such as buffer.frameLength, but nothing seems to change the callback frequency. It isn't clear to me whether the framework simply doesn't allow buffers this small, or whether I'm missing something. Other answers on this site explain why a low buffer size shouldn't be necessary, but I really do need a very fast solution. If AVFAudio can't do this, could AudioKit.io or the Core Audio C API be potential solutions?
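For reference, the 8 ms target translates into a specific frame count per callback, which depends on the session's sample rate. A quick sketch of that arithmetic (note that Apple's documentation says the bufferSize passed to installTap is only a request, and the implementation may choose another size):

```swift
import Foundation

// 125 callbacks per second means each callback must cover 8 ms of audio.
let callbacksPerSecond = 125.0
let bufferDuration = 1.0 / callbacksPerSecond // 0.008 s

// Frames the tap would need per callback at a given sample rate.
func framesPerCallback(sampleRate: Double) -> Int {
    Int(sampleRate * bufferDuration)
}

print(framesPerCallback(sampleRate: 96_000)) // 768 frames at the preferred 96 kHz
print(framesPerCallback(sampleRate: 48_000)) // 384 frames at 48 kHz
print(framesPerCallback(sampleRate: 44_100)) // 352 frames at 44.1 kHz
```

The 352 matches the commented-out AVAudioFrameCount(352) in the code above (8 ms at 44.1 kHz), while the 960 actually in use corresponds to 10 ms at 96 kHz.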

ios swift objective-c avfoundation avaudioengine
1 Answer

0 votes

Try connecting the input node directly to the output node and dropping the tap and the AVAudioPlayerNode entirely. (What is AVAudioPlayerNode good for? Things that aren't time-sensitive.)

This works for me:

let engine = AVAudioEngine()

init() {
    let session = AVAudioSession.sharedInstance()
    try! session.setCategory(.playAndRecord)
    try! session.setPreferredIOBufferDuration(0.008)
    try! session.setActive(true)

    engine.connect(engine.inputNode, to: engine.outputNode, format: nil)
    try! engine.start()
}