AudioKit output changed to the earpiece

Problem description

I implemented the AudioKit "MicrophoneAnalysis" example https://audiokit.io/examples/MicrophoneAnalysis/ in my app.

I want to analyze the frequency of the microphone input and then play the correct note closest to the detected frequency.

Normally, the audio output is the speaker of my iPhone or a connected Bluetooth device. After implementing the "MicrophoneAnalysis" example, however, the output switched to the small speaker at the top of the iPhone that is normally used during phone calls (the earpiece). How can I switch back to the "normal" speaker or a connected Bluetooth device, as before?

var mic: AKMicrophone!
var tracker: AKFrequencyTracker!
var silence: AKBooster!


func initFrequencyTracker() {
        AKSettings.audioInputEnabled = true
        mic = AKMicrophone()
        tracker = AKFrequencyTracker(mic)
        // Gain 0 so the mic feeds the tracker without being audible in the output
        silence = AKBooster(tracker, gain: 0)
    }

    func deinitFrequencyTracker() {
        plotTimer.invalidate()
        do {
            try AudioKit.stop()
            AudioKit.output = nil
        } catch {
            print(error)
        }
    }

    func initPlotTimer() {
        AudioKit.output = silence
        do {
            try AudioKit.start()
        } catch {
            AKLog("AudioKit did not start!")
        }
        setupPlot()
        plotTimer = Timer.scheduledTimer(timeInterval: 0.1, target: self, selector: #selector(updatePlotUI), userInfo: nil, repeats: true)
    }

    func setupPlot() {
        let plot = AKNodeOutputPlot(mic, frame: audioInputPlot.bounds)
        plot.translatesAutoresizingMaskIntoConstraints = false
        plot.alpha = 0.3
        plot.plotType = .rolling
        plot.shouldFill = true
        plot.shouldCenterYAxis = false
        plot.shouldMirror = true
        plot.color = UIColor(named: uiFarbe)
        audioInputPlot.addSubview(plot)

        // Pin the AKNodeOutputPlot to the audioInputPlot
        var constraints = [plot.leadingAnchor.constraint(equalTo: audioInputPlot.leadingAnchor)]
        constraints.append(plot.trailingAnchor.constraint(equalTo: audioInputPlot.trailingAnchor))
        constraints.append(plot.topAnchor.constraint(equalTo: audioInputPlot.topAnchor))
        constraints.append(plot.bottomAnchor.constraint(equalTo: audioInputPlot.bottomAnchor))
        constraints.forEach { $0.isActive = true }
    }

    @objc func updatePlotUI() {
        if tracker.amplitude > 0.1 {
            let trackerFrequency = Float(tracker.frequency)

            guard trackerFrequency < 7_000 else {
                // This is a bit of a hack because modern MacBooks can report super-high frequencies
                return
            }

            var frequency = trackerFrequency
            while frequency > Float(noteFrequencies[noteFrequencies.count - 1]) {
                frequency /= 2.0
            }
            while frequency < Float(noteFrequencies[0]) {
                frequency *= 2.0
            }

            var minDistance: Float = 10_000.0
            var index = 0

            for i in 0..<noteFrequencies.count {
                let distance = fabsf(Float(noteFrequencies[i]) - frequency)
                if distance < minDistance {
                    index = i
                    minDistance = distance
                }
            }
            //                let octave = Int(log2f(trackerFrequency / frequency))

            frequencyLabel.text = String(format: "%0.1f", tracker.frequency)

            if frequencyTranspose(note: notesToTanspose[index]) != droneLabel.text {
                note = frequencyTranspose(note: notesToTanspose[index])
                droneLabel.text = note
                DispatchQueue.main.asyncAfter(deadline: .now() + 0.03, execute: {
                    self.prepareSinglePlayerFirstForStart(note: self.note)
                    self.startSinglePlayer()
                })
            }
        }
    }

    func frequencyTranspose(note: String) -> String {
        var indexNote = notesToTanspose.firstIndex(of: note)!
        let chosenInstrument = UserDefaults.standard.object(forKey: "whichInstrument") as! String
        if chosenInstrument == "Bb" {
            if indexNote + 2 >= notesToTanspose.count {
                indexNote -= 12
            }
            return notesToTanspose[indexNote + 2]
        } else if chosenInstrument == "Eb" {
            if indexNote - 3 < 0 {
                indexNote += 12
            }
            return notesToTanspose[indexNote - 3]
        } else {
            return note
        }
    }
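
The octave-folding and nearest-note search in updatePlotUI can be pulled out into a pure function, which makes that logic easy to unit-test in isolation. This is only a sketch: noteFrequencies and noteNames below are the one-octave reference tables used by the AudioKit MicrophoneAnalysis example, and closestNoteIndex is a hypothetical helper mirroring the loops in the code above.

```swift
import Foundation

// One-octave reference tables from the MicrophoneAnalysis example
let noteFrequencies: [Double] = [16.35, 17.32, 18.35, 19.45, 20.6, 21.83,
                                 23.12, 24.5, 25.96, 27.5, 29.14, 30.87]
let noteNames = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

// Fold an arbitrary frequency into the reference octave by halving/doubling,
// then return the index of the closest reference note.
func closestNoteIndex(to trackerFrequency: Float) -> Int {
    var frequency = trackerFrequency
    while frequency > Float(noteFrequencies[noteFrequencies.count - 1]) {
        frequency /= 2.0
    }
    while frequency < Float(noteFrequencies[0]) {
        frequency *= 2.0
    }

    var minDistance: Float = 10_000.0
    var index = 0
    for i in 0..<noteFrequencies.count {
        let distance = abs(Float(noteFrequencies[i]) - frequency)
        if distance < minDistance {
            index = i
            minDistance = distance
        }
    }
    return index
}
```

For example, 440 Hz folds down to 27.5 Hz, whose closest table entry is "A".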
audiokit
1 Answer

It is good practice to control the audio session settings yourself, so create a method in your app that handles them during initialization.

Below is an example in which I set the category and the desired options:

func start() {
    do {
        let session = AVAudioSession.sharedInstance()
        // .playAndRecord alone routes output to the earpiece;
        // .defaultToSpeaker forces the main (bottom) speaker instead
        try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
        try session.setActive(true, options: .notifyOthersOnDeactivation)
        try session.overrideOutputAudioPort(.speaker)
        try AudioKit.start()
    } catch {
        // your error handler
    }
}

You can call this start method in place of the AudioKit.start() call inside initPlotTimer.
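
For illustration, the question's initPlotTimer could delegate to such a start() method (a sketch, assuming start() is defined on the same class as initPlotTimer):

```swift
func initPlotTimer() {
    AudioKit.output = silence
    start() // configures the AVAudioSession, then calls AudioKit.start()
    setupPlot()
    plotTimer = Timer.scheduledTimer(timeInterval: 0.1, target: self,
                                     selector: #selector(updatePlotUI),
                                     userInfo: nil, repeats: true)
}
```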

The example above uses AVAudioSession directly, which I believe AKSettings wraps (feel free to edit my answer so future readers aren't misled; I'm not looking at the AudioKit source code right now).

Now that AVAudioSession has been shown, let's move on to the methods AudioKit provides, since that is what you are working with.

Here is another example, using AKSettings:

func start() {
    do {
        AKSettings.channelCount = 2
        AKSettings.ioBufferDuration = 0.002
        AKSettings.audioInputEnabled = true
        AKSettings.bufferLength = .medium
        AKSettings.defaultToSpeaker = true
        // check docs for other options and settings

        // .allowBluetooth keeps connected Bluetooth devices available as an output route
        try AKSettings.setSession(category: .playAndRecord, with: [.defaultToSpeaker, .allowBluetooth])
        try AudioKit.start()
    } catch {
        // your handler
    }
}

Remember, you don't necessarily have to name it start or call AudioKit's start method inside it; I'm just making the initialization phase explicit so it is readable for you and for other use cases.

References:

https://developer.apple.com/documentation/avfoundation/avaudiosession/categoryoptions

https://audiokit.io/docs/Classes/AKSettings.html
