The getFloatFrequencyData() method of the AnalyserNode interface copies the current frequency data into a Float32Array array passed into it.
Each item in the array represents the decibel value for a specific frequency. The frequencies are spread linearly from 0 to 1/2 of the sample rate. For example, for a 48000 Hz sample rate, the last item of the array will represent the decibel value for 24000 Hz.
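As a quick illustration (not part of the page's original example; the variable names are just for demonstration), the frequency represented by a given array index can be derived from the context's sample rate and the analyser's fftSize:

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
// Each array index covers a band of sampleRate / fftSize Hz,
// so index 0 is 0 Hz and the last index approaches sampleRate / 2
const hzPerBin = audioCtx.sampleRate / analyser.fftSize;
console.log('Index 10 holds the decibel value near ' + 10 * hzPerBin + ' Hz');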
If you need higher performance and don't care about precision, you can use AnalyserNode.getByteFrequencyData() instead, which works on a Uint8Array.
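For comparison, here is a minimal sketch of that byte-based alternative (assuming the same kind of analyser setup used elsewhere on this page):

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
// The byte variant expects a Uint8Array of frequencyBinCount elements
const byteData = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(byteData);
// Values are unsigned integers from 0 to 255 rather than raw decibel values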
var audioCtx = new AudioContext();
var analyser = audioCtx.createAnalyser();
var dataArray = new Float32Array(analyser.frequencyBinCount); // Float32Array should be the same length as the frequencyBinCount

void analyser.getFloatFrequencyData(dataArray); // fill the Float32Array with data returned from getFloatFrequencyData()
array: The Float32Array that the frequency domain data will be copied to. For any sample which is silent, the value is -Infinity.
If the array has fewer elements than the AnalyserNode.frequencyBinCount, excess elements are dropped. If it has more elements than needed, excess elements are ignored.
Return value: none.
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
// Float32Array should be the same length as the frequencyBinCount
const myDataArray = new Float32Array(analyser.frequencyBinCount);
// fill the Float32Array with data returned from getFloatFrequencyData()
analyser.getFloatFrequencyData(myDataArray);
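Building on that snippet, here is a small illustrative sketch (not from the original example) of how the sizing rule and the -Infinity silence value might be handled when scanning the data:

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
const data = new Float32Array(analyser.frequencyBinCount); // one element per frequency bin
analyser.getFloatFrequencyData(data);
// Find the loudest bin, skipping bins that are completely silent (-Infinity)
let loudest = -1;
for (let i = 0; i < data.length; i++) {
  if (data[i] !== -Infinity && (loudest === -1 || data[i] > data[loudest])) {
    loudest = i;
  }
}
if (loudest === -1) {
  console.log('All bins are silent');
} else {
  console.log('Loudest bin: ' + loudest + ' at ' + data[loudest] + ' dB');
}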
The following example shows basic usage of an AudioContext to connect a MediaElementAudioSourceNode to an AnalyserNode. While the audio is playing, we collect the frequency data repeatedly with requestAnimationFrame() and draw a "winamp bargraph style" visualization to a <canvas> element.

For more complete applied examples/information, check out our Voice-change-O-matic-float-data demo (see the source code too).
<!doctype html>
<body>
<script>
const audioCtx = new AudioContext();
// Create audio source
// Here, we use an audio file, but this could also be e.g. microphone input
const audioEle = new Audio();
audioEle.src = 'my-audio.mp3'; // insert file name here
audioEle.autoplay = true;
audioEle.preload = 'auto';
const audioSourceNode = audioCtx.createMediaElementSource(audioEle);
// Create analyser node
const analyserNode = audioCtx.createAnalyser();
analyserNode.fftSize = 256;
const bufferLength = analyserNode.frequencyBinCount;
const dataArray = new Float32Array(bufferLength);
// Set up audio node network
audioSourceNode.connect(analyserNode);
analyserNode.connect(audioCtx.destination);
// Create 2D canvas
const canvas = document.createElement('canvas');
canvas.style.position = 'absolute';
canvas.style.top = 0;
canvas.style.left = 0;
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
document.body.appendChild(canvas);
const canvasCtx = canvas.getContext('2d');
canvasCtx.clearRect(0, 0, canvas.width, canvas.height);
function draw() {
// Schedule next redraw
requestAnimationFrame(draw);
// Get spectrum data
analyserNode.getFloatFrequencyData(dataArray);
// Draw black background
canvasCtx.fillStyle = 'rgb(0, 0, 0)';
canvasCtx.fillRect(0, 0, canvas.width, canvas.height);
// Draw spectrum
const barWidth = (canvas.width / bufferLength) * 2.5;
let posX = 0;
for (let i = 0; i < bufferLength; i++) {
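// dataArray holds decibel values (negative numbers, down to -Infinity for silence),
// so shift and scale them into a positive pixel height for drawing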
const barHeight = (dataArray[i] + 140) * 2;
canvasCtx.fillStyle = 'rgb(' + Math.floor(barHeight + 100) + ', 50, 50)';
canvasCtx.fillRect(posX, canvas.height - barHeight / 2, barWidth, barHeight / 2);
posX += barWidth + 1;
}
}
draw();
</script>
</body>
| Specification | Status | Comment |
|---|---|---|
| Web Audio API<br>The definition of 'getFloatFrequencyData()' in that specification. | Working Draft | |
| | Chrome | Edge | Firefox | Internet Explorer | Opera | Safari | WebView Android | Chrome Android | Firefox Android | Opera Android | Safari iOS | Samsung Internet Android |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| getFloatFrequencyData | 14 | 12 | 25 | No | 15 | 6 | Yes | 18 | 26 | 14 | 6 | 1.0 |