I have two sets of audio: (1) 15 heartbeat clips and (2) 15 frequency noises. Each frequency noise is only 1 second long, shorter than the heartbeat clips (which are generated dynamically and vary in length).
I can successfully play both through different speakers, but only sequentially (one at a time), not in parallel (together). I do this via `sounddevice` as follows:
```python
import numpy as np
import sounddevice as sd
import soundfile as sf

for i, clip in enumerate(user_heartbeats):
    clip.export("temp.wav", format="wav")
    audio_data, sample_rate = sf.read("temp.wav", dtype='float32')
    sd.play(audio_data, sample_rate, device=heartbeat_speaker_id)
    sd.wait()

    t = np.linspace(0.0, freq_duration_sec,
                    int(freq_duration_sec * sampling_freq), endpoint=False)
    waveform = volume * np.sin(2.0 * np.pi * frequencies_to_play[i] * t)
    sd.play(waveform, sampling_freq, device=frequency_speaker_id)
    sd.wait()
```
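As a sanity check, the tone-generation lines above can be run in isolation. This is a minimal standalone sketch with assumed placeholder values for `volume`, `sampling_freq`, `freq_duration_sec`, and a stand-in `frequency` (the real script derives these elsewhere):

```python
import numpy as np

# Assumed placeholder values, not taken from the original script
volume = 0.5
sampling_freq = 44100        # samples per second
freq_duration_sec = 1.0      # each noise clip is 1 s long
frequency = 440.0            # stand-in for frequencies_to_play[i]

# Sample times, then a sine wave scaled to the desired volume
t = np.linspace(0.0, freq_duration_sec,
                int(freq_duration_sec * sampling_freq), endpoint=False)
waveform = volume * np.sin(2.0 * np.pi * frequency * t)

print(waveform.shape)  # one second of samples: (44100,)
```

Because `|sin| <= 1`, the waveform's amplitude never exceeds `volume`, which is worth confirming before sending it to a speaker.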
I looked into the `threading` module to play them at the same time, but I could not get the expected result:
```python
import threading

def play_audio_pairing(audio, delay, device):
    # load heartbeat sound
    audio_data, sample_rate = sf.read(audio, dtype='float32')
    if delay > 0:
        padding = np.zeros((int(delay * sample_rate), audio_data.shape[1]))
        audio_data = np.concatenate((padding, audio_data), axis=0)
    # Play audio file on the specified sound device
    sd.play(audio_data, sample_rate, device=device)
    sd.wait()

max_audio_duration = max([sf.info(file).duration
                          for file in heartbeat_sounds + frequencies_to_play])

threads = []
for i in range(len(heartbeat_sounds)):
    heartbeat_sound = heartbeat_sounds[i]
    frequency_noise = frequencies_to_play[i % len(frequencies_to_play)]
    heartbeat_delay = max_audio_duration - sf.info(heartbeat_sound).duration
    frequency_delay = max_audio_duration - sf.info(frequency_noise).duration

    thread = threading.Thread(target=play_audio_pairing,
                              args=(heartbeat_sound,
                                    heartbeat_delay - frequency_delay,
                                    heartbeat_speaker_id))
    threads.append(thread)
    thread.start()

    thread = threading.Thread(target=play_audio_pairing,
                              args=(frequency_noise, 0, frequency_speaker_id))
    threads.append(thread)
    thread.start()

for thread in threads:
    thread.join()
```
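The zero-padding step inside `play_audio_pairing` can be exercised on its own. Below is a minimal sketch with a synthetic stereo array standing in for the decoded file (the values here are placeholders, not from the original script):

```python
import numpy as np

sample_rate = 44100
delay = 0.25  # seconds of leading silence to insert

# 1 s of dummy stereo audio in place of sf.read()'s output
audio_data = np.ones((sample_rate, 2), dtype='float32')

# Same padding logic as in play_audio_pairing
if delay > 0:
    padding = np.zeros((int(delay * sample_rate), audio_data.shape[1]))
    audio_data = np.concatenate((padding, audio_data), axis=0)

print(audio_data.shape)  # (55125, 2): 0.25 s of silence + 1 s of audio
```

Note that this indexing assumes `audio_data` is 2-D; `sf.read` returns a 1-D array for mono files, in which case `audio_data.shape[1]` would raise an `IndexError`.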
This only plays the first 3 frequencies, and I cannot hear the heartbeats at all.
Any help on how to combine these so that each frequency noise starts together with its corresponding heartbeat, each playing through its own speaker, would be greatly appreciated.