Our program takes a sound file, splits it into frames, and classifies each frame as voiced or unvoiced using the zero-crossing rate (ZCR) and short-term energy (STE). At one point the algorithm collects the IDs of all the voiced frames. We want a good graphical representation of the voiced frames, but we have not managed to mark them on top of the original data with plot. Right now we can only display the voiced frames with the unvoiced ones removed (at the beginning of the plot):
We would like something like this instead (mock-up made in a graphics editor), so that the voiced frames are visible on top of the original data plot:
We want the image to look like this
Code:
close all; clear all;
% read sound
[data, fs] = audioread('shee_mono.wav');
% normalize data
data = data / abs(max(data));
f_d = 0.025; % frame duration in seconds
%[frames, ~] = vec2frames( data, Nw, Ns, 'rows', @hamming, false);
frames = framing(data, fs, f_d);
ZCR_values_per_frame = ZCR(frames, f_d, fs, data);
f_energy_vector = STECalc(frames);
ste_threshold = 0.01;
zcr_threshold = mean(ZCR_values_per_frame); %take average ZCR as threshold
voiced_id = find_voiced_id(ZCR_values_per_frame, f_energy_vector, zcr_threshold, ste_threshold);
unvoiced_id = 1:size(frames, 1); %vector of all frame indices, 1...96 in order
unvoiced_id = setdiff(unvoiced_id, voiced_id); %keep only the unvoiced frame indices
fr_unvoiced = frames(unvoiced_id,:);
data_unvoiced = reshape(fr_unvoiced',1,[]);
fr_voiced = frames(voiced_id,:);
data_voiced = reshape(fr_voiced',1,[]);
figure
plot(data); hold on;
%plot(data_unvoiced, 'b');
%plot(data_voiced, 'g');
sound(data_voiced, fs);
title ("Blue - original data, green - voiced areas after unvoiced deleted");
[ voiced_timing, unvoiced_timing ] = return_voiced_unvoiced_timings(voiced_id, unvoiced_id, f_d, frames);
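The helper functions framing, ZCR, STECalc and find_voiced_id are not shown in the question. A minimal sketch of what they might look like, assuming non-overlapping rectangular frames of f_d seconds and the usual textbook ZCR/STE definitions (the real implementations in the repo may differ):

```matlab
function frames = framing(data, fs, f_d)
    % Split the signal into non-overlapping frames of f_d seconds,
    % one frame per row; a trailing partial frame is dropped.
    frame_len = round(f_d * fs);
    n_frames  = floor(numel(data) / frame_len);
    frames    = reshape(data(1:n_frames * frame_len), frame_len, n_frames)';
end

function zcr = ZCR(frames, ~, ~, ~)
    % Zero-crossing rate per frame: fraction of adjacent sample
    % pairs whose signs differ.
    s   = sign(frames);
    zcr = mean(abs(diff(s, 1, 2)) > 0, 2);
end

function ste = STECalc(frames)
    % Short-term energy per frame: mean squared amplitude.
    ste = mean(frames .^ 2, 2);
end

function voiced_id = find_voiced_id(zcr, ste, zcr_threshold, ste_threshold)
    % Voiced frames: high energy and low zero-crossing rate.
    voiced_id = find(ste > ste_threshold & zcr < zcr_threshold)';
end
```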
PS: Sorry for any mistakes; English is not my native language.
What you can do is build a time vector tv from the sample rate and the number of samples:
tv = (0:numel(data)-1)/fs;
Then you can use voiced_id (which, judging from your GitHub repo, contains the indices of the voiced frames) to build the time vector for the voiced samples. Note that voiced_id indexes frames, not samples, while data_voiced holds one value per sample, so each frame index has to be expanded to its frame_len sample indices (assuming non-overlapping frames):
frame_len = round(f_d * fs);                            % samples per frame
sample_id = (voiced_id(:) - 1) * frame_len + (1:frame_len); % one row per voiced frame
tv_voice  = tv(reshape(sample_id', 1, []));             % time of every voiced sample
Then plot, using the time vectors as the x values:
plot(tv, data, 'b'); hold on; % hold on so the second plot does not replace the first
plot(tv_voice, data_voiced, 'g');
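An alternative that guarantees the green trace stays aligned with the original signal, and avoids stray lines being drawn across the unvoiced gaps, is to plot a NaN-masked copy of the data on the same time axis. This sketch again assumes non-overlapping frames of f_d seconds:

```matlab
frame_len   = round(f_d * fs);
voiced_mask = false(size(data));
for k = voiced_id
    first = (k - 1) * frame_len + 1;           % first sample of frame k
    voiced_mask(first : first + frame_len - 1) = true;
end

overlay = data;
overlay(~voiced_mask) = NaN;                   % NaN samples are simply not drawn

figure;
plot(tv, data, 'b'); hold on;
plot(tv, overlay, 'g');                        % voiced regions on top, same x axis
title('Blue - original data, green - voiced frames');
```

Because both traces use the same tv, the voiced segments land exactly where they occur in the original waveform.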