The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode. An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an AudioContext before you do anything else, as everything happens inside a context.
An AudioContext can be a target of events; therefore, it implements the EventTarget interface.
<div id="interfaceDiagram" style="display: inline-block; position: relative; width: 100%; padding-bottom: 11.666666666666666%; vertical-align: middle; overflow: hidden;"><svg style="display: inline-block; position: absolute; top: 0; left: 0;" viewBox="-50 0 600 70" preserveAspectRatio="xMinYMin meet"><a xlink:href="https://developer.mozilla.org/en-US/docs/Web/API/EventTarget" target="_top"><rect x="1" y="1" width="110" height="50" fill="#fff" stroke="#D4DDE4" stroke-width="2px" /><text x="56" y="30" font-size="12px" font-family="Consolas,Monaco,Andale Mono,monospace" fill="#4D4E53" text-anchor="middle" alignment-baseline="middle">EventTarget</text></a><polyline points="111,25 121,20 121,30 111,25" stroke="#D4DDE4" fill="none"/><line x1="121" y1="25" x2="151" y2="25" stroke="#D4DDE4"/><a xlink:href="https://developer.mozilla.org/en-US/docs/Web/API/AudioContext" target="_top"><rect x="151" y="1" width="120" height="50" fill="#F4F7F8" stroke="#D4DDE4" stroke-width="2px" /><text x="211" y="30" font-size="12px" font-family="Consolas,Monaco,Andale Mono,monospace" fill="#4D4E53" text-anchor="middle" alignment-baseline="middle">AudioContext</text></a></svg></div>
<style>a:hover text { fill: #0095DD; pointer-events: all; }</style>
Constructor
AudioContext()
- Creates and returns a new AudioContext object.
Properties
AudioContext.currentTime Read only
- Returns a double representing an ever-increasing hardware time in seconds used for scheduling. It starts at 0.

AudioContext.destination Read only
- Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.

AudioContext.listener Read only
- Returns the AudioListener object, used for 3D spatialization.

AudioContext.sampleRate Read only
- Returns a float representing the sample rate (in samples per second) used by all nodes in this context. The sample rate of an AudioContext cannot be changed.

AudioContext.state Read only
- Returns the current state of the AudioContext.

AudioContext.mozAudioChannelType Read only
- Used to return the audio channel that the sound playing in an AudioContext will play in, on a Firefox OS device.
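A quick way to get a feel for these read-only properties is to log them from a freshly created context. This is a browser-only sketch; the `webkitAudioContext` fallback covers older WebKit-based browsers:

```javascript
// Sketch: inspecting the read-only properties of a new AudioContext.
// (window.webkitAudioContext is a fallback for older WebKit browsers.)
var AudioContextClass = window.AudioContext || window.webkitAudioContext;
var ctx = new AudioContextClass();

console.log(ctx.currentTime); // seconds of hardware time; starts at 0
console.log(ctx.sampleRate);  // e.g. 44100 or 48000; fixed for the context's lifetime
console.log(ctx.state);       // "suspended", "running", or "closed"
console.log(ctx.destination); // the AudioDestinationNode (the audio-rendering device)
```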
Event handlers
AudioContext.onstatechange
- An event handler that runs when an event of type statechange has fired. This occurs when the AudioContext's state changes, due to the calling of one of the state-change methods (AudioContext.suspend(), AudioContext.resume(), or AudioContext.close()).
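The statechange event can be observed like this (a browser-only sketch):

```javascript
// Sketch: logging every state change of an AudioContext.
var ctx = new AudioContext();

ctx.onstatechange = function() {
  // ctx.state is one of "suspended", "running", or "closed"
  console.log('AudioContext state is now: ' + ctx.state);
};

// Each of these state-change methods fires a statechange event
// when the transition takes effect:
ctx.suspend();
ctx.resume();
ctx.close();
```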
Methods
Also implements methods from the interface EventTarget.
AudioContext.close()
- Closes the audio context, releasing any system audio resources that it uses.

AudioContext.createBuffer()
- Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.

AudioContext.createConstantSource()
- Creates a ConstantSourceNode object, which is an audio source that continuously outputs a monaural (one-channel) sound signal whose samples all have the same value.

AudioContext.createBufferSource()
- Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer() or returned by AudioContext.decodeAudioData() when it successfully decodes an audio track.

AudioContext.createMediaElementSource()
- Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate audio from <video> or <audio> elements.

AudioContext.createMediaStreamSource()
- Creates a MediaStreamAudioSourceNode associated with a MediaStream representing an audio stream which may come from the local computer microphone or other sources.

AudioContext.createMediaStreamDestination()
- Creates a MediaStreamAudioDestinationNode associated with a MediaStream representing an audio stream which may be stored in a local file or sent to another computer.

AudioContext.createScriptProcessor()
- Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.

AudioContext.createStereoPanner()
- Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.

AudioContext.createAnalyser()
- Creates an AnalyserNode, which can be used to expose audio time and frequency data and, for example, to create data visualisations.

AudioContext.createBiquadFilter()
- Creates a BiquadFilterNode, which represents a second-order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.

AudioContext.createChannelMerger()
- Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.

AudioContext.createChannelSplitter()
- Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.

AudioContext.createConvolver()
- Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.

AudioContext.createDelay()
- Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.

AudioContext.createDynamicsCompressor()
- Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.

AudioContext.createGain()
- Creates a GainNode, which can be used to control the overall volume of the audio graph.

AudioContext.createIIRFilter()
- Creates an IIRFilterNode, which represents a general infinite impulse response (IIR) filter that can be configured to act as several different common filter types.

AudioContext.createOscillator()
- Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.

AudioContext.createPanner()
- Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space.

AudioContext.createPeriodicWave()
- Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.

AudioContext.createWaveShaper()
- Creates a WaveShaperNode, which is used to implement non-linear distortion effects.

AudioContext.createAudioWorker()
- Creates an AudioWorkerNode, which can interact with a web worker thread to generate, process, or analyse audio directly. This was added to the spec on August 29, 2014, and is not yet implemented in any browser.

AudioContext.decodeAudioData()
- Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files.

AudioContext.getOutputTimestamp()
- Returns a new AudioTimestamp containing two correlated audio-stream position values for the context: the AudioTimestamp.contextTime member contains the time of the sample frame which is currently being rendered by the audio output device (i.e., the output audio stream position), in the same units and origin as the context's AudioContext.currentTime; the AudioTimestamp.performanceTime member contains the time estimating the moment when the sample frame corresponding to the stored contextTime value was rendered by the audio output device, in the same units and origin as performance.now().

AudioContext.resume()
- Resumes the progression of time in an audio context that has previously been suspended.

AudioContext.suspend()
- Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.
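As a sketch of how several of these methods combine, the following loads a complete audio file over XMLHttpRequest, decodes it with decodeAudioData(), and plays it through an AudioBufferSourceNode. This is browser-only, and 'audio-file.ogg' is a placeholder URL for the example:

```javascript
// Sketch: fetch a complete audio file, decode it, and play it.
// 'audio-file.ogg' is a placeholder URL, not a real resource.
var ctx = new AudioContext();

var request = new XMLHttpRequest();
request.open('GET', 'audio-file.ogg', true);
request.responseType = 'arraybuffer'; // decodeAudioData expects an ArrayBuffer

request.onload = function() {
  ctx.decodeAudioData(request.response, function(decodedBuffer) {
    var source = ctx.createBufferSource(); // plays an AudioBuffer
    source.buffer = decodedBuffer;
    source.connect(ctx.destination);       // route to the audio-rendering device
    source.start(0);
  }, function() {
    console.error('Decoding the audio data failed');
  });
};

request.send();
```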
Obsolete methods
AudioContext.createJavaScriptNode()
- Creates a JavaScriptNode, used for direct audio processing via JavaScript. This method is obsolete, and has been replaced by AudioContext.createScriptProcessor().

AudioContext.createWaveTable()
- Creates a WaveTableNode, used to define a periodic waveform. This method is obsolete, and has been replaced by AudioContext.createPeriodicWave().
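The modern replacement, createPeriodicWave(), takes arrays of Fourier coefficients. The sketch below builds coefficients for a square wave; buildSquareWaveCoefficients is a hypothetical helper name for this example, not part of the Web Audio API:

```javascript
// Sketch: Fourier coefficients for a square wave, suitable for
// AudioContext.createPeriodicWave(). buildSquareWaveCoefficients is a
// hypothetical helper, not a Web Audio API function.
function buildSquareWaveCoefficients(harmonics) {
  var real = new Float32Array(harmonics + 1); // cosine terms: all zero
  var imag = new Float32Array(harmonics + 1); // sine terms
  for (var n = 1; n <= harmonics; n++) {
    // A square wave contains only odd harmonics, with amplitude 4/(n*pi).
    imag[n] = (n % 2 === 1) ? 4 / (n * Math.PI) : 0;
  }
  return { real: real, imag: imag };
}

// In a browser:
// var ctx = new AudioContext();
// var coeffs = buildSquareWaveCoefficients(8);
// var wave = ctx.createPeriodicWave(coeffs.real, coeffs.imag);
// oscillatorNode.setPeriodicWave(wave);
```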
Examples
Basic audio context declaration:

```js
var audioCtx = new AudioContext();
```

Cross-browser variant:

```js
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioContext();

var oscillatorNode = audioCtx.createOscillator();
var gainNode = audioCtx.createGain();
var finish = audioCtx.destination;
// etc.
```
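The suspend() and resume() methods can pause and restart the whole graph, for example from a play/pause button. This is a browser-only sketch, and the button element's id is an assumption for the example:

```javascript
// Sketch: toggling an AudioContext between "running" and "suspended".
// Assumes a <button id="toggle"> exists in the page (hypothetical markup).
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioContext();

document.getElementById('toggle').onclick = function() {
  if (audioCtx.state === 'running') {
    audioCtx.suspend(); // halts audio hardware access, saving CPU/battery
  } else if (audioCtx.state === 'suspended') {
    audioCtx.resume();  // the progression of time continues
  }
};
```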
Specifications
| Specification | Status | Comment |
|---|---|---|
| Web Audio API: the definition of 'AudioContext' in that specification | Working Draft | |
Browser compatibility
| Feature | Chrome | Edge | Firefox (Gecko) | Internet Explorer | Opera | Safari (WebKit) |
|---|---|---|---|---|---|---|
| Basic support | 10.0 webkit, 35 | (Yes) | 25.0 (25.0) | No support | 15.0 webkit, 22 | 6.0 webkit |
| createStereoPanner() | 42.0 | (Yes) | 37.0 (37.0) | No support | No support | No support |
| onstatechange, state, suspend(), resume() | (Yes) | (Yes) | 40.0 (40.0) | No support | No support | 8.0 |
| createConstantSource() | 56.0 | No support | 52 (52) | No support | 43 | No support |
| Unprefixed | (Yes) | (Yes) | ? | ? | ? | ? |
| Feature | Android Webview | Edge | Firefox Mobile (Gecko) | Firefox OS | IE Mobile | Opera Mobile | Safari Mobile | Chrome for Android |
|---|---|---|---|---|---|---|---|---|
| Basic support | (Yes) | (Yes) | 37.0 (37.0) | 2.2 | No support | (Yes) | No support | (Yes) |
| createStereoPanner() | 42.0 | (Yes) | (Yes) | (Yes) | No support | No support | No support | 42.0 |
| onstatechange, state, suspend(), resume() | (Yes) | (Yes) | (Yes) | (Yes) | No support | No support | No support | (Yes) |
| createConstantSource() | 56.0 | No support | 52.0 (52) | No support | No support | No support | No support | 56.0 |
| Unprefixed | (Yes) | (Yes) | ? | ? | ? | 43 | ? | (Yes) |