Playing streamed PCM audio packets (decoded from Opus) with AVSampleBufferAudioRenderer


I'm working on an iOS project where I receive Opus audio packets and try to play them with AVSampleBufferAudioRenderer. I'm using libopus's own decoder, so ultimately I just need to get the decoded PCM packets playing. The full top-to-bottom pipeline isn't well documented anywhere, but I think I'm close. Here's the code I'm working with so far (edited down, with some values hard-coded for simplicity).

static AVSampleBufferAudioRenderer* audioRenderer;
static AVSampleBufferRenderSynchronizer* renderSynchronizer;
static OpusMSDecoder* opusDecoder;
static short* decodedPacketBuffer;

int samplesPerFrame = 240;
int channelCount    = 2;
int sampleRate      = 48000;
int streams         = 1;
int coupledStreams  = 1;
unsigned char mapping[8] = {0, 1, 0, 0, 0, 0, 0, 0};

// called when the stream is about to start
void AudioInit()
{
    renderSynchronizer = [[AVSampleBufferRenderSynchronizer alloc] init];
    audioRenderer = [[AVSampleBufferAudioRenderer alloc] init];
    [renderSynchronizer addRenderer:audioRenderer];
    
    int decodedPacketSize = samplesPerFrame * sizeof(short) * channelCount; // 240 frames * 2 bytes * 2 channels
    decodedPacketBuffer = SDL_malloc(decodedPacketSize);
    
    int err;
    opusDecoder = opus_multistream_decoder_create(sampleRate,       // 48000
                                                  channelCount,     // 2
                                                  streams,          // 1
                                                  coupledStreams,   // 1
                                                  mapping,
                                                  &err);

    renderSynchronizer.rate = 1.0;
}

// called every X milliseconds with a new packet of audio data to play, IF there's audio. (while testing, X = 5)
void AudioDecodeAndPlaySample(char* sampleData, int sampleLength)
{
    // decode the packet from Opus to (I think??) Linear PCM
    int numSamples;
    numSamples = opus_multistream_decode(opusDecoder,
                                         (unsigned char *)sampleData,
                                         sampleLength,
                                         (short*)decodedPacketBuffer,
                                         samplesPerFrame, // 240
                                         0);

    int bufferSize = sizeof(short) * numSamples * channelCount; // 240 samples * 2 channels

    // LPCM stream description
    AudioStreamBasicDescription asbd = {
        .mFormatID          = kAudioFormatLinearPCM,
        .mFormatFlags       = kLinearPCMFormatFlagIsSignedInteger,
        .mBytesPerPacket    = bufferSize,
        .mFramesPerPacket   = numSamples, // 240
        .mBytesPerFrame     = bufferSize / numSamples,
        .mChannelsPerFrame  = channelCount, // 2
        .mBitsPerChannel    = 16,
        .mSampleRate        = sampleRate // 48000
    };
    
    // audio format description wrapper around asbd
    CMAudioFormatDescriptionRef audioFormatDesc;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                                     &asbd,
                                                     0,
                                                     NULL,
                                                     0,
                                                     NULL,
                                                     NULL,
                                                     &audioFormatDesc);
    
    // data block to store decoded packet into
    CMBlockBufferRef blockBuffer;
    status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                decodedPacketBuffer,
                                                bufferSize,
                                                kCFAllocatorNull,
                                                NULL,
                                                0,
                                                bufferSize,
                                                0,
                                                &blockBuffer);
    
    // data block converted into a sample buffer
    CMSampleBufferRef sampleBuffer;
    status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(kCFAllocatorDefault,
                                                                  blockBuffer,
                                                                  audioFormatDesc,
                                                                  numSamples,
                                                                  kCMTimeZero,
                                                                  NULL,
                                                                  &sampleBuffer);
    
    
    // queueing sample buffer onto audio renderer
    [audioRenderer enqueueSampleBuffer:sampleBuffer];
}

The AudioDecodeAndPlaySample function is called by the library I'm using; as the comment says, each call delivers a packet containing roughly 5 ms of samples (and, importantly, it is not called at all if there's no audio).

There are a lot of places I could be going wrong here. I believe the Opus decoder (per its documentation) outputs interleaved linear PCM, and I hope I've constructed the AudioStreamBasicDescription correctly. I definitely don't know what to do about the PTS (presentation timestamp) in CMAudioSampleBufferCreateReadyWithPacketDescriptions. I've passed zero in the hope that it plays as soon as possible, but I don't know whether that's right.
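In case it matters, this is the kind of running-timestamp bookkeeping I imagine I'd need instead of kCMTimeZero (just a sketch; samplesQueued is a counter I'd have to add, and I don't know whether this is the right approach):

// Running count of audio frames enqueued so far (my own helper state)
static int64_t samplesQueued = 0;

// Derive this packet's PTS from the frames already enqueued, on the
// sample-rate timescale, instead of passing kCMTimeZero
CMTime pts = CMTimeMake(samplesQueued, sampleRate);
samplesQueued += numSamples;

// pts would then replace kCMTimeZero in
// CMAudioSampleBufferCreateReadyWithPacketDescriptions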

I've run this code with error checking everywhere (edited out here for simplicity), and I don't get any errors until after [audioRenderer enqueueSampleBuffer:sampleBuffer], at which point audioRenderer.error reports an unknown error. It's clearly unhappy with whatever I'm feeding it. Most enqueueSampleBuffer code samples I've seen wrap it in requestMediaDataWhenReady with a dispatch queue, which I've also tried, to no avail. (I suspect that's better practice rather than strictly necessary, so I've been trying to get the simplest case working first; I can put it back if need be. A rough sketch of what I tried is below.)
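For completeness, this is roughly the shape of the requestMediaDataWhenReady version I tried (a sketch from memory; pendingBuffers is a holding queue I added for that variant):

// Holding queue for decoded sample buffers (only used in this variant)
static NSMutableArray* pendingBuffers;

// Installed once, at the end of AudioInit; the renderer invokes the block
// on the given queue whenever it can accept more data
[audioRenderer requestMediaDataWhenReadyOnQueue:dispatch_get_main_queue() usingBlock:^{
    while ([audioRenderer isReadyForMoreMediaData] && pendingBuffers.count > 0) {
        CMSampleBufferRef buf = (__bridge CMSampleBufferRef)pendingBuffers[0];
        [audioRenderer enqueueSampleBuffer:buf];
        [pendingBuffers removeObjectAtIndex:0];
    }
}];

// ...and AudioDecodeAndPlaySample then hands off instead of enqueueing directly:
//     [pendingBuffers addObject:(__bridge_transfer id)sampleBuffer];
// (__bridge_transfer moves the Create call's +1 reference into ARC)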

Feel free to answer in Swift if you're more comfortable with it; I can work with either. (For better or worse, I'm stuck with Objective-C here. 🙂)

ios objective-c avfoundation core-audio audiotoolbox
1 Answer

It sounds like your iOS audio project is on the right track. Your approach of decoding the Opus data and playing it back through AVSampleBufferAudioRenderer is fundamentally sound, but there are a couple of things in your code worth a closer look: the AudioStreamBasicDescription itself, and the presentation timestamps. Sketches of both follow.
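First, the format description. For uncompressed linear PCM, Core Audio treats one frame as one packet: mFramesPerPacket should be 1, and mBytesPerPacket should equal mBytesPerFrame (channelCount * sizeof(short), i.e. 4 bytes here) rather than the size of the whole decoded buffer. Flagging the samples as packed also helps. A sketch using your hard-coded values:

// Interleaved, packed, signed 16-bit LPCM at 48 kHz stereo.
// For uncompressed audio one packet is one frame, so the per-packet
// and per-frame byte counts are identical.
AudioStreamBasicDescription asbd = {
    .mFormatID          = kAudioFormatLinearPCM,
    .mFormatFlags       = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    .mBytesPerPacket    = channelCount * sizeof(short), // 4, not the whole buffer
    .mFramesPerPacket   = 1,                            // always 1 for LPCM
    .mBytesPerFrame     = channelCount * sizeof(short), // 4
    .mChannelsPerFrame  = channelCount,                 // 2
    .mBitsPerChannel    = 16,
    .mSampleRate        = sampleRate                    // 48000
};

Second, the timestamps. Rather than creating every sample buffer at kCMTimeZero, you can keep a running PTS and advance it by each packet's duration: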

// Global variable to keep track of the running PTS
CMTime currentPTS = kCMTimeZero;

void AudioDecodeAndPlaySample(char* sampleData, int sampleLength)
{
    // [Existing decoding logic]

    // [Existing LPCM stream description logic]

    // [Existing sample buffer creation logic]

    // Stamp this buffer with the running PTS *before* advancing it, so the
    // first packet plays at time zero and each packet starts where the
    // previous one ended (numSamples frames at sampleRate)
    CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, currentPTS);
    currentPTS = CMTimeAdd(currentPTS, CMTimeMake(numSamples, sampleRate));

    // Queue the sample buffer onto the audio renderer. (If you reintroduce
    // requestMediaDataWhenReadyOnQueue:, note that it installs a persistent
    // callback and should be set up once at init, not once per packet.)
    [audioRenderer enqueueSampleBuffer:sampleBuffer];
    CFRelease(sampleBuffer); // balance the Create; the renderer retains its own reference
}
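One more thing to double-check (an educated guess on my part, since the renderer's error is opaque): CMBlockBufferCreateWithMemoryBlock with kCFAllocatorNull as the block allocator wraps decodedPacketBuffer without copying it, yet playback is asynchronous and the next decode overwrites that same memory. Letting the block buffer allocate its own storage and copying each packet's PCM into it removes that hazard:

// Create a block buffer that owns freshly allocated memory of the right size...
CMBlockBufferRef blockBuffer;
OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                     NULL,                // allocate internally
                                                     bufferSize,
                                                     kCFAllocatorDefault, // and own it
                                                     NULL,
                                                     0,
                                                     bufferSize,
                                                     kCMBlockBufferAssureMemoryNowFlag,
                                                     &blockBuffer);

// ...then copy this packet's decoded PCM into it, so the renderer keeps a
// stable snapshot even after decodedPacketBuffer is reused
if (status == kCMBlockBufferNoErr) {
    status = CMBlockBufferReplaceDataBytes(decodedPacketBuffer,
                                           blockBuffer,
                                           0,
                                           bufferSize);
}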