Is there a way to reset the buffer in an SFSpeechAudioBufferRecognitionRequest?


I have the following record function:

func record(sketchVM: SketchViewModel, isRecording: Binding<Bool>) {

    // MARK: 1. Create a recognizer.

    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
        handleError(withMessage: "Speech recognizer not available.")
        return
    }

    // MARK: 2. Create a speech recognition request.

    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let recognitionRequest = recognitionRequest else { return }
    recognitionRequest.shouldReportPartialResults = true

    recognizer.recognitionTask(with: recognitionRequest) { result, error in
        guard error == nil else { self.handleError(withMessage: error!.localizedDescription); return }
        guard let result = result else { return }

        print("got a new result: \(result.bestTranscription.formattedString), final : \(result.isFinal)")

    }

    // MARK: 3. Create a recording and classification pipeline.

    audioEngine = AVAudioEngine()
    guard let audioEngine = audioEngine else { return }

    inputNode = audioEngine.inputNode
    guard let inputNode = inputNode else { return }

    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
        self.recognitionRequest?.append(buffer)
    }

    // Build the graph.
    audioEngine.prepare()

    // MARK: 4. Start recognizing speech.

    do {
        // Activate the session.
        audioSession = AVAudioSession.sharedInstance()
        guard let audioSession = audioSession else { return }
        try audioSession.setActive(false)
        try audioSession.setCategory(.record, mode: .default)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

        // Start the processing pipeline.
        try audioEngine.start()
        Task { @MainActor in
            isRecording.wrappedValue = true
        }
    } catch {
        handleError(withMessage: error.localizedDescription)
    }
}

It works fine, except that I can't reset the output of result.bestTranscription. It keeps appending to everything recorded before. The user wants to start a new recording, but we don't necessarily want to tear down all the objects and stop recording.

Is there a way to leave the session, the request, and the engine in place and just tell it to clear the audio it has accumulated so far?
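
The closest idea I have is to leave the engine, session, and tap running and just swap in a fresh request/task pair, roughly like the untested sketch below. It assumes the recognizer and the recognition task are kept as stored properties (the recognizer and recognitionTask names here are hypothetical; my code above doesn't keep either of them):

func restartTranscription() {
    // Close out the current request/task without touching the engine or session.
    recognitionRequest?.endAudio()
    recognitionTask?.cancel()
    recognitionTask = nil

    // Swap in a fresh request. The tap's `self.recognitionRequest?.append(buffer)`
    // starts feeding this new request, so the audio pipeline keeps running.
    let newRequest = SFSpeechAudioBufferRecognitionRequest()
    newRequest.shouldReportPartialResults = true
    recognitionRequest = newRequest

    // Start a new task on the same recognizer; its transcription starts empty.
    recognitionTask = recognizer?.recognitionTask(with: newRequest) { result, error in
        guard error == nil else { self.handleError(withMessage: error!.localizedDescription); return }
        guard let result = result else { return }
        print("restarted result: \(result.bestTranscription.formattedString), final: \(result.isFinal)")
    }
}

But I'm not sure whether this is the intended approach, or whether SFSpeechAudioBufferRecognitionRequest has a built-in way to discard the audio it has already received.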

swift avfoundation avaudiosession