The createBuffer() method of the BaseAudioContext interface is used to create a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.

For more details about audio buffers, check out the AudioBuffer reference page.

Note: createBuffer() used to be able to take compressed data and give back decoded samples, but this ability was removed from the spec because all the decoding was done on the main thread, so createBuffer() was blocking other code execution. The asynchronous method decodeAudioData() does the same thing: it takes compressed audio, say, an MP3 file, and directly gives you back an AudioBuffer that you can then set to play via an AudioBufferSourceNode. For simple uses like playing an MP3, decodeAudioData() is what you should be using.
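For instance, a minimal sketch of that approach might look like the following (the file name audio.mp3 is a hypothetical placeholder, and the promise-based form of decodeAudioData() is assumed):

var audioCtx = new AudioContext();

fetch('audio.mp3')  // hypothetical file name
  .then(function(response) { return response.arrayBuffer(); })
  .then(function(arrayBuffer) { return audioCtx.decodeAudioData(arrayBuffer); })
  .then(function(decodedBuffer) {
    // Play the decoded AudioBuffer via an AudioBufferSourceNode
    var source = audioCtx.createBufferSource();
    source.buffer = decodedBuffer;
    source.connect(audioCtx.destination);
    source.start();
  });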

Syntax

var buffer = baseAudioContext.createBuffer(numOfChannels, length, sampleRate);

Parameters

Note: For an in-depth explanation of how audio buffers work, and what these parameters mean, read Audio buffers: frames, samples and channels from our Basic concepts guide.

numOfChannels

An integer representing the number of channels this buffer should have. The default value is 1, and all user agents must support at least 32 channels.

length
An integer representing the size of the buffer in sample-frames (a sample-frame contains one sample for each channel, so its size in bytes is the size of a single sample multiplied by numOfChannels). To determine the length to use for a specific number of seconds of audio, use numSeconds * sampleRate.
sampleRate

The sample rate of the linear audio data in sample-frames per second. All browsers must support sample rates in at least the range 8,000 Hz to 96,000 Hz.
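To tie the parameters together, here is a small sketch of creating a mono buffer for a given duration at the context's own sample rate (the two-second duration is an arbitrary example):

var audioCtx = new AudioContext();
var numSeconds = 2; // arbitrary example duration
// length in sample-frames = seconds * sample rate, e.g. 2 * 44100 = 88200
var buffer = audioCtx.createBuffer(1, numSeconds * audioCtx.sampleRate, audioCtx.sampleRate);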

Returns

An AudioBuffer configured based on the specified options.

Exceptions

NotSupportedError
One or more of the options are negative or otherwise have an invalid value (such as numberOfChannels being higher than supported, or a sampleRate outside the nominal range).
RangeError

There isn't enough memory available to allocate the buffer.
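As a sketch of how the first of these surfaces in practice, passing a channel count of zero throws synchronously (assuming a fresh AudioContext):

var audioCtx = new AudioContext();
try {
  // 0 channels is outside the valid range, so this should throw a NotSupportedError
  audioCtx.createBuffer(0, 22050, 44100);
} catch (e) {
  console.error(e.name + ': ' + e.message);
}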

Examples

First, a couple of trivial examples to help explain how the parameters are used:

var audioCtx = new AudioContext();
var buffer = audioCtx.createBuffer(2, 22050, 44100);

If you use this call, you will get a stereo buffer (two channels) that, when played back on an AudioContext running at 44100Hz (very common; most normal sound cards run at this rate), will last for 0.5 seconds: 22050 frames / 44100Hz = 0.5 seconds.
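You can confirm this from the buffer's own properties:

console.log(buffer.numberOfChannels); // 2
console.log(buffer.sampleRate);       // 44100
console.log(buffer.duration);         // 0.5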

var audioCtx = new AudioContext();
var buffer = audioCtx.createBuffer(1, 22050, 22050);

If you use this call, you will get a mono buffer (one channel) that, when played back on an AudioContext running at 44100Hz, will be automatically resampled to 44100Hz (and therefore yield 44100 frames), and will last for 1.0 second: 44100 frames / 44100Hz = 1 second.
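Note that the buffer itself still reports the rate it was created with; the resampling happens when it is played back on the context:

console.log(buffer.sampleRate); // 22050
console.log(buffer.duration);   // 1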

Note: Audio resampling is very similar to image resizing. Say you've got a 16x16 image, but you want it to fill a 32x32 area: you resize (resample) it. The result has less quality (it can be blurry or edgy, depending on the resizing algorithm), but it works, and the original image takes up less space than a 32x32 one would. Resampled audio is exactly the same: you save space, but in practice you will be unable to properly reproduce high-frequency content (treble sound).

Now let's look at a more complex createBuffer() example, in which we create a three-second buffer, fill it with white noise, and then play it via an AudioBufferSourceNode. The comments should clearly explain what is going on. You can also run the code live, or view the source.

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

// Create an empty three-second stereo buffer at the sample rate of the AudioContext
var myAudioBuffer = audioCtx.createBuffer(2, audioCtx.sampleRate * 3, audioCtx.sampleRate);

// Fill the buffer with white noise;
// just random values between -1.0 and 1.0
for (var channel = 0; channel < myAudioBuffer.numberOfChannels; channel++) {
  // This gives us the actual Float32Array that contains the data
  var nowBuffering = myAudioBuffer.getChannelData(channel);
  for (var i = 0; i < myAudioBuffer.length; i++) {
    // Math.random() is in [0; 1.0]
    // audio needs to be in [-1.0; 1.0]
    nowBuffering[i] = Math.random() * 2 - 1;
  }
}

// Get an AudioBufferSourceNode.
// This is the AudioNode to use when we want to play an AudioBuffer
var source = audioCtx.createBufferSource();
// Set the buffer in the AudioBufferSourceNode
source.buffer = myAudioBuffer;
// Connect the AudioBufferSourceNode to the
// destination so we can hear the sound
source.connect(audioCtx.destination);
// Start the source playing
source.start();
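As a side note, if you already have your samples in a Float32Array, AudioBuffer.copyToChannel() is an alternative to writing through getChannelData(); a minimal sketch, reusing the buffer from above:

// Generate the noise once, then copy it into both channels (0 and 1)
var noise = new Float32Array(myAudioBuffer.length);
for (var i = 0; i < noise.length; i++) {
  noise[i] = Math.random() * 2 - 1;
}
myAudioBuffer.copyToChannel(noise, 0);
myAudioBuffer.copyToChannel(noise, 1);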

Specifications

Specification: Web Audio API, the definition of 'createBuffer()' in that specification.
Status: Working Draft

Browser compatibility

The compatibility table on this page is generated from structured data. If you'd like to contribute to the data, please check out https://github.com/mdn/browser-compat-data and send us a pull request.
createBuffer

Desktop:
- Chrome 10 (implemented with the vendor prefix: webkit)
- Edge ≤18
- Firefox 53 (originally implemented on AudioContext in Firefox 25)
- Internet Explorer: not supported
- Opera 22 (from 15 with the vendor prefix: webkit)
- Safari 6 (implemented with the vendor prefix: webkit)

Mobile:
- WebView Android: supported
- Chrome for Android 33
- Firefox for Android 53 (originally implemented on AudioContext in Firefox for Android 26)
- Opera for Android 22 (from 14 with the vendor prefix: webkit)
- Safari on iOS 6 (implemented with the vendor prefix: webkit)
- Samsung Internet 2.0

