Getting microphone input with Audio Queue in Swift 3


I am working on an app that records speech through the built-in microphone and sends it to a server, so I need to get the byte stream from the microphone while recording.

After googling and searching Stack Overflow for quite a while, I think I have figured out how it should work, but it does not. I believe Audio Queues may be the way to go.

Here is what I have tried so far:

func test() {
    func callback(_ a :UnsafeMutableRawPointer?, _ b : AudioQueueRef, _ c :AudioQueueBufferRef, _ d :UnsafePointer<AudioTimeStamp>, _ e :UInt32, _ f :UnsafePointer<AudioStreamPacketDescription>?) {
        print("test")
    }

    var inputQueue: AudioQueueRef? = nil

    var aqData = AQRecorderState(
        mDataFormat: AudioStreamBasicDescription(
            mSampleRate: 16000,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: 0,
            mBytesPerPacket: 2,
            mFramesPerPacket: 1,     // Must be set to 1 for uncompressed formats
            mBytesPerFrame: 2,
            mChannelsPerFrame: 1,    // Mono recording
            mBitsPerChannel: 2 * 8,  // 2 Bytes
            mReserved: 0),  // Must be set to 0 according to https://developer.apple.com/reference/coreaudio/audiostreambasicdescription
        mQueue: inputQueue!,
        mBuffers: [AudioQueueBufferRef](),
        bufferByteSize: 32,
        mCurrentPacket: 0,
        mIsRunning: true)

    var error = AudioQueueNewInput(&aqData.mDataFormat,
                                   callback,
                                   nil,
                                   nil,
                                   nil,
                                   0,
                                   &inputQueue)
    AudioQueueStart(inputQueue!, nil)
}

It compiles and the app starts, but as soon as I call test() I get an exception:

fatal error: unexpectedly found nil while unwrapping an Optional value

The exception is raised by

mQueue: inputQueue!

I understand why this happens (inputQueue has no value), but I don't know how to initialize inputQueue correctly. The problem is that the Audio Queue documentation is sparse for Swift users, and I could not find any working example on the internet.

Can anyone tell me what I am doing wrong?

ios swift audio-recording audioqueue
2 Answers
3 votes

Initialize the audio queue with AudioQueueNewInput(...) (or the output equivalent) before using it:

let sampleRate = 16000
let numChannels = 2
var inFormat = AudioStreamBasicDescription(
        mSampleRate:        Double(sampleRate),
        mFormatID:          kAudioFormatLinearPCM,
        mFormatFlags:       kAudioFormatFlagsNativeFloatPacked,
        mBytesPerPacket:    UInt32(numChannels * MemoryLayout<Float32>.size),
        mFramesPerPacket:   1,
        mBytesPerFrame:     UInt32(numChannels * MemoryLayout<Float32>.size),
        mChannelsPerFrame:  UInt32(numChannels),
        mBitsPerChannel:    UInt32(8 * MemoryLayout<Float32>.size),
        mReserved:          UInt32(0))

var inQueue: AudioQueueRef? = nil
AudioQueueNewInput(&inFormat, callback, nil, nil, nil, 0, &inQueue)

var aqData = AQRecorderState(
    mDataFormat:    inFormat, 
    mQueue:         inQueue!, // inQueue is initialized now and can be unwrapped
    mBuffers: [AudioQueueBufferRef](),
    bufferByteSize: 32,
    mCurrentPacket: 0,
    mIsRunning:     true)

See Apple's documentation for the details.
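For completeness, here is a minimal sketch of the whole pipeline the question asks about: create the queue first, then allocate and enqueue a few buffers, then start. The callback body, the choice of three buffers, and the 50 ms buffer duration are my assumptions, not part of Apple's API; error handling is omitted.

```swift
import AudioToolbox
import Foundation

// C-convention callback: the queue calls this each time a buffer has been filled.
let inputCallback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
    // mAudioData points at the recorded bytes; mAudioDataByteSize says how many are valid.
    let bytes = Data(bytes: buffer.pointee.mAudioData,
                     count: Int(buffer.pointee.mAudioDataByteSize))
    // ... hand `bytes` to the network layer here (hypothetical) ...
    // Re-enqueue the buffer so recording continues.
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

// 16 kHz, 16-bit signed integer, packed, mono.
var format = AudioStreamBasicDescription(
    mSampleRate: 16000,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0)

var queue: AudioQueueRef? = nil
AudioQueueNewInput(&format, inputCallback, nil, nil, nil, 0, &queue)

if let queue = queue {
    // 50 ms of 16-bit mono at 16 kHz = 1600 bytes per buffer (an assumption;
    // pick whatever latency suits the app).
    let bufferByteSize: UInt32 = UInt32(16000 * 2 * 50 / 1000)
    for _ in 0..<3 {
        var buffer: AudioQueueBufferRef? = nil
        AudioQueueAllocateBuffer(queue, bufferByteSize, &buffer)
        if let buffer = buffer {
            AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
        }
    }
    AudioQueueStart(queue, nil)
}
```

Note that the buffers must be enqueued before AudioQueueStart, otherwise the queue has nowhere to record into and the callback never fires.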


0 votes

This code from our project works fine (Objective-C):

AudioBuffer *buff;
AudioQueueRef queue;
AudioStreamBasicDescription fmt = { 0 };

static void HandleInputBuffer (
                               void                                 *aqData,
                               AudioQueueRef                        inAQ,
                               AudioQueueBufferRef                  inBuffer,
                               const AudioTimeStamp                 *inStartTime,
                               UInt32                               inNumPackets,
                               const AudioStreamPacketDescription   *inPacketDesc
                               ) {
    // Process the recorded bytes in inBuffer->mAudioData here.
}

- (void)initialize {
    thisClass = self;

    __block struct AQRecorderState aqData;

    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mSampleRate       = 44100.0;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerFrame    = sizeof (SInt16);
    fmt.mBytesPerPacket   = sizeof (SInt16);
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;

    OSStatus status = AudioQueueNewInput(&fmt,
                                         HandleInputBuffer,
                                         &aqData,
                                         NULL,
                                         kCFRunLoopCommonModes,
                                         0,
                                         &queue);

    AudioQueueBufferRef buffers[kNumberBuffers];
    UInt32 bufferByteSize = kSamplesSize;
    for (int i = 0; i < kNumberBuffers; ++i) {
        OSStatus allocateStatus = AudioQueueAllocateBuffer(queue, bufferByteSize, &buffers[i]);
        NSLog(@"allocateStatus = %d", (int)allocateStatus);

        OSStatus enqueueStatus = AudioQueueEnqueueBuffer(queue, buffers[i], 0, NULL);
        NSLog(@"enqueueStatus = %d", (int)enqueueStatus);
    }

    AudioQueueStart(queue, NULL);
}
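The constants kNumberBuffers and kSamplesSize above are defined elsewhere in that project. A common way to size the buffers (a sketch of my own, not taken from the answer) is to derive the byte count from the stream format and a target buffer duration; for fixed-size-packet formats such as linear PCM this is just frames-per-buffer times bytes-per-frame:

```swift
// Derive an Audio Queue buffer size from the format and a target duration.
// Assumes fixed-size packets (true for linear PCM); variable-rate formats
// need Apple's fuller DeriveBufferSize logic instead.
func bufferByteSize(sampleRate: Double, bytesPerFrame: UInt32, seconds: Double) -> UInt32 {
    return UInt32(sampleRate * seconds) * bytesPerFrame
}

// Half a second of 16-bit mono at 44.1 kHz:
let size = bufferByteSize(sampleRate: 44100, bytesPerFrame: 2, seconds: 0.5)
// 22050 frames * 2 bytes = 44100 bytes
```

A shorter duration lowers latency (the callback fires more often) at the cost of more callback overhead, which matters for the streaming-to-server use case in the question.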