Streaming audio from the microphone to the IBM Watson SpeechToText web service using the Java SDK


I am trying to send a continuous audio stream from the microphone directly to the IBM Watson SpeechToText web service using the Java SDK. One of the examples shipped with the distribution (RecognizeUsingWebSocketsExample) shows how to stream a file in .WAV format to the service. However, .WAV files require their length to be specified in advance, so the naive approach of appending one buffer at a time to the file is not feasible.

It appears that SpeechToText.recognizeUsingWebSocket can take a stream, but feeding it an instance of AudioInputStream does not seem to work: the connection is established, but no transcripts are returned, even with RecognizeOptions.interimResults(true).

public class RecognizeUsingWebSocketsExample {
  private static CountDownLatch lock = new CountDownLatch(1);

  public static void main(String[] args) throws FileNotFoundException, InterruptedException {
    SpeechToText service = new SpeechToText();
    service.setUsernameAndPassword("<username>", "<password>");

    AudioInputStream audio = null;

    try {
      final AudioFormat format = new AudioFormat(16000, 16, 1, true, false);
      DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
      TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
      line.open(format);
      line.start();
      audio = new AudioInputStream(line);
    } catch (LineUnavailableException e) {
      // TODO Auto-generated catch block
      e.printStackTrace();
    }

    RecognizeOptions options = new RecognizeOptions.Builder()
      .continuous(true)
      .interimResults(true)
      .contentType(HttpMediaType.AUDIO_WAV)
      .build();

    service.recognizeUsingWebSocket(audio, options, new BaseRecognizeCallback() {
      @Override
      public void onTranscription(SpeechResults speechResults) {
        System.out.println(speechResults);
        if (speechResults.isFinal())
          lock.countDown();
      }
    });

    lock.await(1, TimeUnit.MINUTES);
  }
}

Any help would be greatly appreciated.

-rg

Here is an update based on German's comment below (thanks for that).

I was able to use javaFlacEncode to convert the WAV stream arriving from the microphone into a FLAC stream and save it to a temporary file. Unlike WAV audio files, whose size is fixed at creation, a FLAC file can easily be appended to.

    WAV_audioInputStream = new AudioInputStream(line);
    FileInputStream FLAC_audioInputStream = new FileInputStream(tempFile);

    StreamConfiguration streamConfiguration = new StreamConfiguration();
    streamConfiguration.setSampleRate(16000);
    streamConfiguration.setBitsPerSample(8);
    streamConfiguration.setChannelCount(1);

    flacEncoder = new FLACEncoder();
    flacOutputStream = new FLACFileOutputStream(tempFile);  // write to temp disk file

    flacEncoder.setStreamConfiguration(streamConfiguration);
    flacEncoder.setOutputStream(flacOutputStream);

    flacEncoder.openFLACStream();

    ...
    // convert data
    int frameLength = 16000;
    int[] intBuffer = new int[frameLength];
    byte[] byteBuffer = new byte[frameLength];

    while (true) {
        int count = WAV_audioInputStream.read(byteBuffer, 0, frameLength);
        for (int j1 = 0; j1 < count; j1++)
            intBuffer[j1] = byteBuffer[j1];

        flacEncoder.addSamples(intBuffer, count);
        flacEncoder.encodeSamples(count, false);  // 'false' means non-final frame
    }

    flacEncoder.encodeSamples(flacEncoder.samplesAvailableToEncode(), true);  // final frame
    WAV_audioInputStream.close();
    flacOutputStream.close();
    FLAC_audioInputStream.close();

After adding an arbitrary number of frames, the resulting file can be analyzed (using curl or recognizeUsingWebSocket()) without any problems. However, recognizeUsingWebSocket() returns its final result as soon as it reaches the end of the FLAC file, even though the file's last frame may not be final (i.e., it was written with encodeSamples(count, false)).

I would expect recognizeUsingWebSocket() to block until the final frame has been written to the file. In practice, this means the analysis stops after the first frame, because analyzing the first frame takes less time than collecting the second, so by the time the results are returned the end of the file has already been reached.

Is this the right way to implement streaming audio from a microphone in Java? It seems like a common use case.


Here is a modification of RecognizeUsingWebSocketsExample, incorporating some of Daniel's suggestions below. It uses the PCM content type (passed as a String, along with a frame size) and attempts to signal the end of the audio stream, albeit not very successfully.

As before, the connection is established, but the recognize callback is never called. Closing the stream also does not seem to be interpreted as the end of audio. I must be misunderstanding something here...

public static void main(String[] args) throws IOException, LineUnavailableException, InterruptedException {

  final PipedOutputStream output = new PipedOutputStream();
  final PipedInputStream  input  = new PipedInputStream(output);

  final AudioFormat format = new AudioFormat(16000, 8, 1, true, false);
  DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
  final TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
  line.open(format);
  line.start();

  Thread thread1 = new Thread(new Runnable() {
    @Override
    public void run() {
      try {
        final int MAX_FRAMES = 2;
        byte buffer[] = new byte[16000];
        for (int j1 = 0; j1 < MAX_FRAMES; j1++) {  // read two frames from microphone
          int count = line.read(buffer, 0, buffer.length);
          System.out.println("Read audio frame from line: " + count);
          output.write(buffer, 0, buffer.length);
          System.out.println("Written audio frame to pipe: " + count);
        }
        /** no need to fake end-of-audio; StopMessage will be sent
         * automatically by SDK once the pipe is drained (see WebSocketManager)
        // signal end of audio; based on WebSocketUploader.stop() source
        byte[] stopData = new byte[0];
        output.write(stopData);
        **/
      } catch (IOException e) {
      }
    }
  });
  thread1.start();

  final CountDownLatch lock = new CountDownLatch(1);

  SpeechToText service = new SpeechToText();
  service.setUsernameAndPassword("<username>", "<password>");

  RecognizeOptions options = new RecognizeOptions.Builder()
    .continuous(true)
    .interimResults(false)
    .contentType("audio/pcm; rate=16000")
    .build();

  service.recognizeUsingWebSocket(input, options, new BaseRecognizeCallback() {
    @Override
    public void onConnected() {
      System.out.println("Connected.");
    }
    @Override
    public void onTranscription(SpeechResults speechResults) {
      System.out.println("Received results.");
      System.out.println(speechResults);
      if (speechResults.isFinal())
        lock.countDown();
    }
  });

  System.out.println("Waiting for STT callback ... ");

  lock.await(5, TimeUnit.SECONDS);

  line.stop();

  System.out.println("Done waiting for STT callback.");
}

Dani, I instrumented the source of WebSocketManager (bundled with the SDK), replacing a sendMessage() call with an explicit StopMessage payload, as follows:

    /**
     * Send input steam.
     *
     * @param inputStream the input stream
     * @throws IOException Signals that an I/O exception has occurred.
     */
    private void sendInputSteam(InputStream inputStream) throws IOException {
      int cumulative = 0;
      byte[] buffer = new byte[FOUR_KB];
      int read;
      while ((read = inputStream.read(buffer)) > 0) {
        cumulative += read;
        if (read == FOUR_KB) {
          socket.sendMessage(RequestBody.create(WebSocket.BINARY, buffer));
        } else {
          System.out.println("completed sending " + cumulative/16000 + " frames over socket");
          socket.sendMessage(RequestBody.create(WebSocket.BINARY, Arrays.copyOfRange(buffer, 0, read)));  // partial buffer write
          System.out.println("signaling end of audio");
          socket.sendMessage(RequestBody.create(WebSocket.TEXT, buildStopMessage().toString()));  // end of audio signal

        }

      }
      inputStream.close();
    }

Neither of the sendMessage() options (sending zero-length binary content or sending a stop text message) seems to work. The calling code is unchanged from above. The resulting output is:

Waiting for STT callback ... 
Connected.
Read audio frame from line: 16000
Written audio frame to pipe: 16000
Read audio frame from line: 16000
Written audio frame to pipe: 16000
completed sending 2 frames over socket
onFailure: java.net.SocketException: Software caused connection abort: socket write error

Revised: in fact, the end-of-audio call is never reached. The exception is thrown while writing the last (partial) buffer to the socket.

Why is the connection aborted? That usually happens when the peer closes the connection.

As for point 2): does either of these issues matter at this stage? It looks as though the recognition process is never started at all... The audio is valid (I wrote the stream to disk and was able to recognize it by streaming it from the file, as I noted above).

Also, on further review of the WebSocketManager source, onMessage() already sends the StopMessage on return from sendInputSteam() (i.e., when the audio stream, or the pipe in the example above, runs dry), so there is no need to call it explicitly. The problem definitely occurs before the audio data transmission completes. The behavior is the same regardless of whether a PipedInputStream or an AudioInputStream is passed as input. In both cases the exception is thrown while sending binary data.

java speech-to-text ibm-watson
2 Answers

6 votes

The Java SDK has an example and supports this.

Update your pom.xml:

 <dependency>
   <groupId>com.ibm.watson.developer_cloud</groupId>
   <artifactId>java-sdk</artifactId>
   <version>3.3.1</version>
 </dependency>
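
If you build with Gradle instead of Maven, the equivalent dependency (same coordinates and version as the pom snippet above; Gradle syntax is my assumption, not part of the original answer) would look like:

    // build.gradle -- same artifact as the Maven dependency above
    dependencies {
        compile 'com.ibm.watson.developer_cloud:java-sdk:3.3.1'
    }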

Here is an example of how to listen to the microphone.

SpeechToText service = new SpeechToText();
service.setUsernameAndPassword("<username>", "<password>");

// Signed PCM AudioFormat with 16kHz, 16 bit sample size, mono
int sampleRate = 16000;
AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, false);
DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);

if (!AudioSystem.isLineSupported(info)) {
  System.out.println("Line not supported");
  System.exit(0);
}

TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
line.open(format);
line.start();

AudioInputStream audio = new AudioInputStream(line);

RecognizeOptions options = new RecognizeOptions.Builder()
  .continuous(true)
  .interimResults(true)
  .timestamps(true)
  .wordConfidence(true)
  //.inactivityTimeout(5) // use this to stop listening when the speaker pauses, i.e. for 5s
  .contentType(HttpMediaType.AUDIO_RAW + "; rate=" + sampleRate)
  .build();

service.recognizeUsingWebSocket(audio, options, new BaseRecognizeCallback() {
  @Override
  public void onTranscription(SpeechResults speechResults) {
    System.out.println(speechResults);
  }
});

System.out.println("Listening to your voice for the next 30s...");
Thread.sleep(30 * 1000);

// closing the WebSockets underlying InputStream will close the WebSocket itself.
line.stop();
line.close();

System.out.println("Fin.");

0 votes

What you need to do is not feed the audio to the STT service as a file, but as a headerless stream of audio samples. You simply feed the samples captured from the microphone over the WebSocket. You need to set the content type to "audio/pcm; rate=16000", where 16000 is the sample rate in Hz. If your sample rate is different, which depends on how your microphone encodes the audio, replace 16000 with your value, e.g. 44100, 48000, etc.
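
For example, a minimal sketch of building RecognizeOptions for headerless PCM at a hypothetical 48000 Hz capture rate (substitute whatever rate your microphone actually delivers):

    // Sketch only: RecognizeOptions for raw PCM at an assumed 48000 Hz capture rate.
    int sampleRate = 48000;  // assumption for illustration; match your TargetDataLine's AudioFormat
    RecognizeOptions options = new RecognizeOptions.Builder()
        .continuous(true)
        .interimResults(true)
        .contentType("audio/pcm; rate=" + sampleRate)
        .build();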

When fed PCM audio, the STT service will not stop recognizing until you signal the end of audio by sending an empty binary message over the WebSocket.
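
For reference, using the same okhttp-based sendMessage() API shown in the modified sendInputSteam() earlier, sending that empty binary frame might look like the following sketch ('socket' is assumed to be the WebSocket field from WebSocketManager):

    // Sketch only: signal end-of-audio with a zero-length binary WebSocket message.
    // 'socket' is assumed to be the same WebSocket field used in sendInputSteam().
    socket.sendMessage(RequestBody.create(WebSocket.BINARY, new byte[0]));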


Looking at the new version of your code, I see a few problems:

1) Signaling the end of audio is done by sending an empty binary message over the WebSocket, which is not what you are doing. The lines

 // signal end of audio; based on WebSocketUploader.stop() source
 byte[] stopData = new byte[0];
 output.write(stopData);

do nothing, because they do not cause an empty WebSocket message to be sent. Could you call the method WebSocketUploader.stop() instead?

2) You are capturing audio at 8 bits per sample; you should use 16 bits to get adequate quality. Also, you are only capturing a couple of seconds of audio, which is not enough for testing. Could you write the audio you push to STT to a file and then open it with Audacity (using the import feature)? That way you can make sure what you are feeding to STT is good audio.
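
A minimal sketch of both suggestions, assuming a 16 kHz line; the 10-second duration and output file name are illustrative, not from the original answer. It captures 16-bit mono PCM and writes it to a WAV file that can be imported into Audacity for inspection:

    // Sketch: capture 16-bit mono PCM and dump it to a WAV file for checking in Audacity.
    // The 10-second duration and file name are illustrative assumptions.
    AudioFormat format = new AudioFormat(16000, 16, 1, true, false);
    TargetDataLine line = (TargetDataLine) AudioSystem.getLine(new DataLine.Info(TargetDataLine.class, format));
    line.open(format);
    line.start();
    AudioInputStream audio = new AudioInputStream(line);

    // Stop the line after a few seconds on another thread; AudioSystem.write
    // keeps writing until the line is stopped and closed.
    new Thread(() -> {
        try { Thread.sleep(10_000); } catch (InterruptedException ignored) { }
        line.stop();
        line.close();
    }).start();

    AudioSystem.write(audio, AudioFileFormat.Type.WAVE, new File("stt-input-check.wav"));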