The decodeAudioData() method of the BaseAudioContext interface is used to asynchronously decode audio file data contained in an ArrayBuffer. In this case the ArrayBuffer is typically loaded from an XMLHttpRequest or a FileReader. The decoded AudioBuffer is resampled to the AudioContext's sampling rate, then passed to a callback or promise.
This is the preferred method of creating an audio source for the Web Audio API from an audio track. This method only works on complete file data, not fragments of audio file data.
Older callback syntax:
baseAudioContext.decodeAudioData(ArrayBuffer, successCallback, errorCallback);
Newer promise-based syntax:
Promise<decodedData> baseAudioContext.decodeAudioData(ArrayBuffer);
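For browsers that only implement the older callback form, the callback syntax can be wrapped in a promise so the same calling code works with both. This is a minimal sketch; the helper name decodeAudioDataPromise is our own, not part of the Web Audio API:

```javascript
// Hypothetical helper (not part of the Web Audio API): wraps the
// callback form of decodeAudioData in a promise, so calling code can
// use .then()/.catch() even where the promise-based syntax is missing.
function decodeAudioDataPromise(audioCtx, arrayBuffer) {
  return new Promise(function (resolve, reject) {
    audioCtx.decodeAudioData(arrayBuffer, resolve, reject);
  });
}
```

With this in place, decodeAudioDataPromise(audioCtx, audioData).then(...) behaves like the newer syntax.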
ArrayBuffer
An ArrayBuffer containing the audio data to be decoded, usually grabbed from an XMLHttpRequest, WindowOrWorkerGlobalScope.fetch() or FileReader.
successCallback
A callback function to be invoked when the decoding successfully finishes. The single argument to this callback is an AudioBuffer representing the decodedData (the decoded PCM audio data). Usually you'll want to put the decoded data into an AudioBufferSourceNode, from which it can be played and manipulated how you want.
errorCallback
An optional error callback, to be invoked if an error occurs when the audio data is being decoded.
Void, or a Promise object that fulfills with the decodedData.
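Because the promise rejects when the data cannot be decoded, callers usually want to handle that rejection explicitly. The sketch below shows one way to do so; the helper name decodeOrNull is our own, not part of the API:

```javascript
// Hypothetical helper (our own name, not part of the Web Audio API):
// resolves with the decoded AudioBuffer, or with null if decoding
// fails, logging the error instead of letting the rejection propagate.
function decodeOrNull(audioCtx, arrayBuffer) {
  return audioCtx.decodeAudioData(arrayBuffer).then(
    function (decodedData) { return decodedData; },
    function (err) {
      console.error('decodeAudioData failed:', err);
      return null; // callers check for null before using the buffer
    }
  );
}
```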
In this section we will first cover the older callback-based system and then the newer promise-based syntax.
In this example, the getData() function uses XHR to load an audio track, setting the responseType of the request to arraybuffer so that it returns an array buffer as its response, which we then store in the audioData variable. We then pass this buffer into a decodeAudioData() function; the success callback takes the successfully decoded PCM data, puts it into an AudioBufferSourceNode created using AudioContext.createBufferSource(), connects the source to the AudioContext.destination, and sets it to loop.
The buttons in the example simply run getData() to load the track and start it playing, and stop it playing, respectively. When the stop() method is called on the source, the source is cleared out.
Note: You can run the example live (or view the source).
// define variables
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source;

var pre = document.querySelector('pre');
var myScript = document.querySelector('script');
var play = document.querySelector('.play');
var stop = document.querySelector('.stop');

// use XHR to load an audio track, and
// decodeAudioData to decode it and stick it in a buffer.
// Then we put the buffer into the source
function getData() {
  source = audioCtx.createBufferSource();
  var request = new XMLHttpRequest();

  request.open('GET', 'viper.ogg', true);
  request.responseType = 'arraybuffer';

  request.onload = function() {
    var audioData = request.response;

    audioCtx.decodeAudioData(audioData, function(buffer) {
        source.buffer = buffer;
        source.connect(audioCtx.destination);
        source.loop = true;
      },
      function(e) {
        // the error callback receives a DOMException describing the failure
        console.error('Error with decoding audio data: ' + e.message);
      });
  };

  request.send();
}

// wire up buttons to stop and play audio
play.onclick = function() {
  getData();
  source.start(0);
  play.setAttribute('disabled', 'disabled');
};

stop.onclick = function() {
  source.stop(0);
  play.removeAttribute('disabled');
};

// dump script to pre element
pre.innerHTML = myScript.innerHTML;
audioCtx.decodeAudioData(audioData).then(function(decodedData) {
  // use the decoded data here
});
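The promise-based syntax also combines naturally with fetch(), which was mentioned earlier as a source of the ArrayBuffer. The following is a sketch, not the article's own example: loadTrack is our own helper name, and 'viper.ogg' simply follows the track used above.

```javascript
// Sketch: load and decode a track using fetch() and the promise syntax.
// loadTrack is a hypothetical helper; 'viper.ogg' follows the example above.
async function loadTrack(audioCtx, url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioCtx.decodeAudioData(arrayBuffer);
  const source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioCtx.destination);
  return source; // the caller can then call source.start(0)
}
```

As in the XHR example, a fresh AudioBufferSourceNode must be created each time playback restarts, so returning the node to the caller keeps that responsibility explicit.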
| Specification | Status | Comment |
|---|---|---|
| Web Audio API: The definition of 'decodeAudioData()' in that specification. | Working Draft | |
| | Chrome | Edge | Firefox | IE | Opera | Safari | WebView Android | Chrome Android | Firefox Android | Opera Android | Safari iOS | Samsung Internet Android |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| decodeAudioData | 10 (prefixed) | ≤18 | 53 (see notes) | No | 22 | 6 (prefixed) | Yes | 33 | 53 (see notes) | 22 | 6 (prefixed) | 2.0 |
| Promise-based syntax | 49 | ≤79 | 53 (see notes) | No | Yes | No | 49 | 49 | 53 (see notes) | ? | No | 5.0 |

(prefixed): requires a vendor prefix or a different name. (see notes): see implementation notes. ?: compatibility unknown.