Microphone produces no sound until the volume is adjusted

Problem description — votes: 0, answers: 1

No sound from the microphone in my WebRTC application until the volume is adjusted.

Expected behavior: when the microphone is turned on, I expect to hear sound immediately, without having to touch the volume control.

Actual behavior: currently, the microphone produces no sound until the volume control is adjusted. Once the volume is adjusted, sound comes through as expected.

This issue occurs consistently across browsers. I have made sure that microphone permission is granted and that the browser supports the necessary Web Audio API features.
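A common cause of this exact symptom is the browser's autoplay policy: an AudioContext created outside of a user gesture starts in the "suspended" state and stays silent until resume() is called from within a gesture (moving a volume slider counts as one). Below is a minimal sketch of a guard for that case; ResumableContext is a stand-in for the real AudioContext type so the snippet stays self-contained.

```typescript
// Minimal shape of the AudioContext members the guard touches
// (a stand-in for the real DOM type, so this sketch is self-contained).
interface ResumableContext {
  state: "suspended" | "running" | "closed";
  resume(): Promise<void>;
}

// Resume a context that the autoplay policy left suspended.
// Returns true when a resume() call was actually issued.
async function ensureRunning(ctx: ResumableContext): Promise<boolean> {
  if (ctx.state !== "suspended") {
    return false;
  }
  await ctx.resume();
  return true;
}
```

In a real component this would be called right after `new AudioContext()`, and again from a click or keydown handler so the gesture requirement is satisfied.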

useEffect(() => {
    for (const peer of peers.values()) {
        const audioConsumer = peer.getConsumerByType(SERVICE_TYPE.VOICE);
        if (audioConsumer) {
            if (!gainNodes.current[peer.id.toString()]) {
                peer.resume(SERVICE_TYPE.VOICE);
                const ms = new MediaStream();
                ms.addTrack(audioConsumer.track);
                const audioContext = new (window.AudioContext || window.webkitAudioContext)();
                const gainNode = audioContext.createGain() as GainNode;
                gainNode.gain.value = volume[peer.id.toString()] || 1;
                const source = audioContext.createMediaStreamSource(ms);
                source.connect(gainNode);
                gainNode.connect(audioContext.destination);
                gainNodes.current[peer.id.toString()] = gainNode; 
            }
        }
    }
    return () => {
        for (const key in gainNodes.current) {
            if (gainNodes.current.hasOwnProperty(key)) {
                const gainNode = gainNodes.current[key];
                if (gainNode) {
                    gainNode.disconnect();
                }
            }
        }
        gainNodes.current = {};
    };
}, [peers, volume]);


useEffect(() => {
    const handleVolumeChange = () => {
        const mic = navigator.mediaDevices.getUserMedia({ audio: true });
        mic.then(stream => {
            if (stream.getAudioTracks().length === 0) {
                setVolume(prev => {
                    const newVolume = { ...prev };
                    for (const peer of peers.values()) {
                        newVolume[peer.id.toString()] = 1; 
                    }
                    return newVolume;
                });
                audioRefs.current.forEach(audio => {
                    if (audio) {
                        audio.volume = 1; 
                    }
                });
            } else {
                setVolume(prev => {
                    const newVolume = { ...prev };
                    for (const peer of peers.values()) {
                        if (newVolume[peer.id.toString()] === 0) {
                            newVolume[peer.id.toString()] = 1; 
                        }
                    }
                    return newVolume;
                });
                audioRefs.current.forEach(audio => {
                    if (audio) {
                        audio.volume = 1; 
                    }
                });
            }

            stream.getTracks().forEach(track => {
                track.onmute = () => {
                    setVolume(prev => {
                        const newVolume = { ...prev };
                        for (const peer of peers.values()) {
                            newVolume[peer.id.toString()] = 0; 
                        }
                        return newVolume;
                    });
                    audioRefs.current.forEach(audio => {
                        if (audio) {
                            audio.volume = 0; 
                        }
                    });
                };

                track.onunmute = () => {
                    setVolume(prev => {
                        const newVolume = { ...prev };
                        for (const peer of peers.values()) {
                            if (newVolume[peer.id.toString()] === 0) {
                                newVolume[peer.id.toString()] = 1; 
                            }
                        }
                        return newVolume;
                    });
                    audioRefs.current.forEach(audio => {
                        if (audio) {
                            audio.volume = 1;
                        }
                    });
                };
            });
        });
    };
    handleVolumeChange();

}, [peers]);

const handleVolumeChange = (e: React.ChangeEvent<HTMLInputElement>, peerId: string) => {
    const newVolume = parseFloat(e.target.value);
    setVolume(prevVolume => ({
        ...prevVolume,
        [peerId]: newVolume === 0 ? 0.0001 : newVolume, 
    }));
    const gainNode = gainNodes.current[peerId];
    if (gainNode) {
        gainNode.gain.value = newVolume === 0 ? 0.0001 : newVolume; 
    }
};

return (
    <div className={classes.root}>
        {Array.from(peers.values()).map((peer, i) => {
            const audioConsumer = peer.getConsumerByType(SERVICE_TYPE.VOICE);
            if (audioConsumer) {
                return (
                    <div key={i}>
                        <input                 
                            type="range"
                            min="0"
                            max="1"
                            step="0.01"
                            value={volume[peer.id.toString()] || 1}
                            onChange={(e) => handleVolumeChange(e, peer.id.toString())}
                            ref={(input) => {
                                if (input) {
                                    input.onmouseup = () => input.blur();
                                }
                            }}
                        />

                    </div>
                );
            }
            return null;
        })}
    </div>
);
reactjs typescript webrtc mediastream audiocontext
1 Answer

0 votes

The code that enables the microphone sits behind the handleVolumeChange() event inside the second useEffect() section. It should be moved elsewhere so that it does not depend on the user changing the volume first. Here is one idea of how that could work:

function handleSuccess(stream) {
    handleVolumeChange(stream);
}

navigator.mediaDevices.getUserMedia({ audio: true }).then(handleSuccess);

Basically, move your

const mic = navigator...

code outside of the

const handleVolumeChange = () => {...

section. Hopefully this points you in the right direction.
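A rough sketch of that restructuring is below. Here getMedia is injected as a parameter (a stand-in for navigator.mediaDevices.getUserMedia, so the snippet does not depend on a browser), and onStream stands in for whatever wiring the component does with the acquired stream.

```typescript
// Simplified stream shape; real code would use the DOM MediaStream type.
interface MicStream {
  getAudioTracks(): unknown[];
}

type GetMedia = (constraints: { audio: boolean }) => Promise<MicStream>;

// Acquire the microphone once, up front (e.g. from a mount-time
// useEffect), instead of inside the volume-change handler.
async function initMic(
  getMedia: GetMedia,
  onStream: (stream: MicStream) => void,
): Promise<void> {
  const stream = await getMedia({ audio: true });
  onStream(stream);
}

// In the component this would be wired up roughly as:
//   useEffect(() => {
//     initMic((c) => navigator.mediaDevices.getUserMedia(c), handleStream);
//   }, []);
```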
