This article looks at how to deal with dropouts when playing WebAudio from a WebSocket stream; it should be a useful reference if you are running into the same problem.

Problem Description

I have a software-defined radio playing an audio stream from a WebSocket server, and a client which consumes the data and plays it using an AudioBufferSourceNode.

It mostly works. The only problem is that there are momentary dropouts every few seconds, presumably caused by the overhead involved in creating each successive AudioBufferSourceNode instance. The WebAudio draft spec says that AudioBuffer should be used for playing sounds that are no longer than a minute or so, and that longer sounds should be played using a MediaElementSourceNode. That doesn't work for me, because I need to play audio from a WebSocket source, and there's no way that I know of to make a media element (e.g. HTML5 audio element) work with a WebSocket.

Maybe I'm trying to do something WebAudio can't support by stringing AudioBufferSourceNode instances together and expecting them to play one after another seamlessly. But it seems there should be a way to play WebSocket data through WebAudio, and indeed aurora.js (together with the aurora-websocket.js plugin) seems to do it. I coded up a client using aurora.js, but I ran into other problems, for which I created an aurora.js issue on GitHub. In the meantime, I'm hoping that I can do in my client what they seem to have done with respect to using WebAudio to play data seamlessly from a WebSocket.

Here is an elided view of my code, to show the implementation I'm using.

var context = ...   // the AudioContext
var gainNode = ...  // a GainNode, presumably routed on to context.destination

// Decode callback: play one decoded chunk immediately.
var playBuffer = function(buf) {
    var source = context.createBufferSource();
    source.buffer = buf;
    source.connect(gainNode);
    source.start();  // no time argument: starts "as soon as possible"
};

var socket = ...
socket.binaryType = 'arraybuffer';
socket.addBinaryListener(function (data) {
    // data is an ArrayBuffer holding one chunk of the stream
    context.decodeAudioData(data, playBuffer);
});
socket.connect...

I also tried an implementation wherein I keep track of incoming buffers from the WebSocket and play them in the order received, via an AudioBufferSourceNode, after the 'ended' event is received from the previous AudioBufferSourceNode. This has the same dropout problem that the above implementation has.
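
For reference, that second approach can be sketched roughly like this (illustrative only; pending, playing, and playNext are names invented here, built on the same context, gainNode, and socket as above):

var pending = [];      // decoded AudioBuffers waiting to be played
var playing = false;   // is a source node currently active?

function playNext() {
    if (pending.length === 0) {
        playing = false;
        return;
    }
    playing = true;
    var source = context.createBufferSource();
    source.buffer = pending.shift();
    source.connect(gainNode);
    source.onended = playNext;  // chain the next chunk when this one finishes
    source.start();
}

socket.addBinaryListener(function (data) {
    context.decodeAudioData(data, function (buf) {
        pending.push(buf);
        if (!playing) playNext();
    });
});

(As the answer below points out, this still leaves gaps, because the ended event only fires after playback has already stopped.)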

Recommended Answer

Is your stream really guaranteed to deliver a complete audio file in each network chunk? (decodeAudioData does not work with partial MP3 chunks.)
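
If in doubt, the standard error callback of decodeAudioData makes it easy to spot chunks that are not independently decodable (a quick diagnostic sketch, reusing playBuffer from the question):

socket.addBinaryListener(function (data) {
    context.decodeAudioData(data, playBuffer, function (err) {
        // Fires when the chunk is not a complete, decodable audio unit.
        console.warn('decodeAudioData failed for this chunk:', err);
    });
});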

It seems (from the code snippet above) that you're just relying on network timing to get the stream chunks started at the right time? That's guaranteed not to line up properly; you need to keep a bit of latency in the stream (to handle inconsistent network delivery), and carefully schedule each chunk. (The bit above that makes me cringe is source.start() with no time param; that will not keep the chunks scheduled one right after another.) For example:

var nextStartTime = 0;

function addChunkToQueue( buffer ) {
    if (!nextStartTime) {
        // we've not yet started the queue - just queue this up,
        // leaving a "latency gap" so we're not desperately trying
        // to keep up.  Note if the network is slow, this is going
        // to fail.  Latency gap here is 1 second.
        nextStartTime = audioContext.currentTime + 1; 
    }
    var bsn = audioContext.createBufferSource();
    bsn.buffer = buffer;
    bsn.connect( audioContext.destination );
    bsn.start( nextStartTime );

    // Ensure the next chunk will start at the right time
    nextStartTime += buffer.duration;
}
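
Wired into the question's receive path, this scheduling approach might look roughly like the following (a sketch only; it assumes the question's context and the answer's audioContext refer to the same AudioContext, and that each WebSocket message still decodes as a complete chunk):

socket.addBinaryListener(function (data) {
    // Decode the chunk as before, but hand it to the scheduling queue
    // instead of starting it immediately.
    context.decodeAudioData(data, addChunkToQueue);
});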

In addition, depending on how big your chunks are, I'd wonder if garbage collection isn't contributing to the problem. You should check it out in the profiler.

The onended path is not going to work well; it's reliant on JS event handling, and only fires AFTER the audio system is done playing; so there will ALWAYS be a gap.

Finally - this is not going to work well if the sound stream does not match the default audio device's sample rate; there are always going to be clicks, because decodeAudioData will resample to the device rate, which will not have a perfect duration. It will work, but there will likely be artifacts like clicks at the boundaries of chunks. You need a feature that's not yet spec'ed or implemented - selectable AudioContext sample rates - in order to fix this.
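
A quick way to check whether this last issue applies (a sketch; 48000 is a stand-in for whatever rate the SDR stream actually uses):

var streamSampleRate = 48000;  // hypothetical: replace with the stream's real rate

if (context.sampleRate !== streamSampleRate) {
    // decodeAudioData resamples every chunk to context.sampleRate, so chunk
    // durations are no longer sample-exact and clicks at boundaries are likely.
    console.warn('Stream rate ' + streamSampleRate +
                 ' differs from device rate ' + context.sampleRate);
}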

That concludes this article on dropouts in WebAudio playback from a WebSocket; we hope the recommended answer helps you solve the problem.
