Sound clipping in audio sent from iOS to Android


I am recording audio on iOS from an AudioUnit, encoding the bytes with Opus, and then sending them over UDP to the Android side. The problem is that the sound plays back with clipping. I also tested by sending the raw, unencoded data from iOS to Android, and that plays back perfectly.

My AudioSession code is:

      try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
        try audioSession.setPreferredIOBufferDuration(0.02)
        try audioSession.setActive(true)
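
For reference, a small diagnostic sketch using standard AVAudioSession properties (not part of the question's code): the hardware may not honor the requested 0.02 s exactly, and the granted values determine how many frames each render callback delivers, which is why inNumberFrames comes out near 341 rather than a round number.

import AVFoundation

// Check what the hardware actually granted after activation.
let session = AVAudioSession.sharedInstance()
print("actual sample rate:", session.sampleRate)            // e.g. 48000 Hz
print("actual IO buffer duration:", session.ioBufferDuration)
print("frames per callback ≈", session.sampleRate * session.ioBufferDuration)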

My recording callback code is:

func performRecording(
    _ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus
{
    var err: OSStatus = noErr

    // Pull the recorded samples from the input bus (bus 1) into ioData.
    err = AudioUnitRender(audioUnit!, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData)

    if let mData = ioData.pointee.mBuffers.mData {
        // Reinterpret the buffer as 16-bit mono samples.
        let ptrData = mData.bindMemory(to: Int16.self, capacity: Int(inNumberFrames))
        let bufferPtr = UnsafeBufferPointer(start: ptrData, count: Int(inNumberFrames))

        count += 1
        addedBuffer += Array(bufferPtr)

        // After two callbacks, push the accumulated samples into the ring buffer...
        if count == 2 {
            let _ = TPCircularBufferProduceBytes(&circularBuffer, addedBuffer,
                                                 UInt32(addedBuffer.count * 2))
            count = 0
            addedBuffer = []

            // ...and immediately pull back out up to bytesToCopy (640 samples * 2 bytes).
            let buffer = TPCircularBufferTail(&circularBuffer, &availableBytes)
            memcpy(&targetBuffer, buffer, min(bytesToCopy, Int(availableBytes)))
            TPCircularBufferConsume(&circularBuffer, UInt32(min(bytesToCopy, Int(availableBytes))))

            self.audioRecordingDelegate(inTimeStamp.pointee.mSampleTime / 16000.0, targetBuffer)
        }
    }
    return err
}

Here inNumberFrames comes out at roughly 341, so I append two callbacks' worth of samples together to reach the larger frame size Android needs (640), and with the help of TPCircularBuffer I encode exactly 640 samples at a time.
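
To make the buffer arithmetic explicit (numbers taken from the question; a sketch of the accounting, not the actual pipeline):

// Two ~341-frame callbacks feed one 640-sample Opus frame.
let framesPerCallback = 341
let opusFrameSize = 640                  // 640 samples @ 16 kHz = 40 ms, a valid Opus frame size
let produced = 2 * framesPerCallback     // 682 samples pushed per pass
let leftover = produced - opusFrameSize  // 42 samples stay behind in the circular buffer
// The leftover accumulates (42, 84, ...) across passes, so consumption must
// always read exact 640-sample chunks or the stream drifts out of frame alignment.
print(produced, leftover)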

func gotSomeAudio(timeStamp: Double, samples: [Int16]) {

    // Opus-encode one 640-sample frame.
    let encodedData = opusHelper?.encodeStream(of: samples)

    let myData = encodedData!.withUnsafeBufferPointer {
        Data(buffer: $0)
    }

    // Wrap the encoded packet in a Protobuf message with a sequence number and timestamp.
    var protoModel = ProtoModel()
    seqNumber += 1
    protoModel.sequenceNumber = seqNumber
    protoModel.timeStamp = Date().currentTimeInMillis()
    protoModel.payload = myData

    DispatchQueue.global().async {
        do {
            try self.tcpClient?.send(data: protoModel)
        } catch {
            print(error.localizedDescription)
        }
    }

    let diff = CFAbsoluteTimeGetCurrent() - start  // `start` is set elsewhere
    print("Time diff is \(diff)")
}

In the code above, I Opus-encode a 640-sample frame, put it into the ProtoBuf payload, and send it over UDP.
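
The question's `OpusHelper.encodeStream(of:)` is not shown, so for context here is a minimal sketch of what encoding one 640-sample frame with the libopus C API typically looks like (assuming libopus is exposed to Swift through a bridged C module; the module name is illustrative):

import COpus  // hypothetical module name for bridged libopus

func makeEncoder() -> OpaquePointer? {
    var err: Int32 = 0
    // 16 kHz mono, VoIP tuning; 640 samples = 40 ms at this rate.
    let enc = opus_encoder_create(16000, 1, OPUS_APPLICATION_VOIP, &err)
    return err == OPUS_OK ? enc : nil
}

func encodeFrame(_ enc: OpaquePointer, frame: [Int16]) -> Data? {
    precondition(frame.count == 640, "Opus needs a whole frame")
    var packet = [UInt8](repeating: 0, count: 1275)  // max Opus packet size
    let n = opus_encode(enc, frame, 640, &packet, Int32(packet.count))
    return n > 0 ? Data(packet[0..<Int(n)]) : nil
}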

On the Android side, I parse the Protobuf, decode the 640-sample frames, and play them with AudioTrack. There is no problem on the Android side itself, since I can record and play sound using Android alone; the clipping only appears when I record on iOS and play back on Android.

Please don't suggest increasing the frame size by changing the preferred IO buffer duration; I want to solve this without changing it.

https://stackoverflow.com/a/57873492/12020007 was helpful.

ios swift core-audio audiounit opus
1 Answer

Your code is doing Swift memory allocation (the array concatenation) and Swift method calls (your recording delegate) inside the audio callback. Apple recommends (in its WWDC sessions on audio) *not* doing any memory allocation or method calls inside a real-time audio callback context, especially when requesting a short preferred IO buffer duration.
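
One common way to follow that advice, sketched against the question's setup (names such as `drainLoop` and `running` are illustrative, not from the question): keep the render callback down to AudioUnitRender plus a single produce into the pre-allocated TPCircularBuffer, and move the concatenation, Opus encoding, and delegate call to an ordinary thread that consumes the ring buffer.

import AudioToolbox

// Render callback: no Swift array allocation, no method calls, just a bounded copy.
func performRecording(
    _ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus
{
    let err = AudioUnitRender(audioUnit!, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData)
    if err == noErr, let mData = ioData.pointee.mBuffers.mData {
        // 2 bytes per Int16 sample, mono.
        _ = TPCircularBufferProduceBytes(&circularBuffer, mData, inNumberFrames * 2)
    }
    return err
}

// Ordinary-priority thread: pull exact 640-sample frames and encode/send there.
func drainLoop() {
    let frameBytes: UInt32 = 640 * 2
    while running {
        var available: UInt32 = 0
        if let tail = TPCircularBufferTail(&circularBuffer, &available), available >= frameBytes {
            var frame = [Int16](repeating: 0, count: 640)  // allocation is fine off the audio thread
            memcpy(&frame, tail, Int(frameBytes))
            TPCircularBufferConsume(&circularBuffer, frameBytes)
            gotSomeAudio(timeStamp: 0, samples: frame)     // encode + packetize + send
        } else {
            usleep(5_000)  // nothing to do yet; back off ~5 ms
        }
    }
}

This keeps the real-time thread deterministic, and because the consumer always reads whole 640-sample chunks, the frame alignment the Opus encoder needs is preserved.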
