By Brian Rinaldi
The web is typically treated as a purely visual medium, but audio can be an important part of creating great user experiences – and not just in games. The judicious use of audio can offer the user valuable feedback and context. For example, we’re used to desktop applications giving audio feedback when a new message arrives, when the application has an important notification or when an action completes either successfully or unsuccessfully.
With the introduction of the Web Audio API, these kinds of audio interactions are available in the browser. Most demonstrations of this show how to use the Web Audio API to load a predetermined audio file and play it back or even play back a song. However, you can actually create your own sounds and music using oscillators. In this article, we’ll see how to create different types of waves using oscillators, how to tune those to notes and then even how to play notes of a predetermined length in a sequence to create a tune.
A Note on Browser Support
Before we begin, I should make it clear that the Web Audio API is still under development. It currently works in most browsers (with the exception of IE). However, the API itself is large and not all features are yet supported across all browsers. Below is the browser support from CanIUse, though it is not broken down by feature.
[Web Audio API browser support table from CanIUse]
For the purposes of this article, all the code was tested on the current release of Chrome.
What is an Oscillator?
An oscillator, as the name implies, creates a repetitive, oscillating wave. Classic analog synthesizers (like the Moog and others made popular in the ’70s and ’80s) used oscillators producing common waveforms like sine waves, square waves, triangle waves and sawtooth waves to create many of the sounds you associate with them. Much of the classic 8-bit game music did the same, and this is now common in chiptunes.
This article will focus on using Web Audio oscillators to create musical notes, since that is typically what you will want to achieve – random frequencies will, in most cases, simply sound wrong.
Basic Waveforms using Web Audio
Creating the basic waveforms discussed above (i.e. sine, square, triangle and sawtooth) using the Web Audio API is easy. As with everything in the Web Audio API, you first need to create an AudioContext. Then you simply use an OscillatorNode, choose its type and set its frequency. Finally, you start the oscillator and connect it to the AudioContext output.
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var ctx = new AudioContext();
var o = ctx.createOscillator();
o.type = 'sine'; // or 'square', 'triangle', 'sawtooth'
o.frequency.value = 261.63; // middle C
o.connect(ctx.destination);
o.start(0);
When you want the note to stop, you simply call the stop(0) method on the oscillator (in the above example, that would be o.stop(0)). You may be wondering what the 0 in the start and stop methods implies. In this case, it simply means start immediately and stop immediately. We’ll discuss more complex usage later.
The below example shows the four standard wave types. Clicking and holding on each type will cause an oscillator of that type to play. If you move the mouse up and down, the frequency of the oscillator will change.
[iajsfiddle fiddle=”84H8b” height=”500px” width=”100%” show=”result,js,resources,html,css” skin=”default”]
I should note that you can do custom waveforms, but those won’t be covered in this article.
Creating Notes with Oscillators
The trick to creating notes is knowing what frequency corresponds to each particular note. There are a variety of tunings in music, but we’re going to stick with what is called equal temperament tuning. I’ve borrowed the values for the tuning frequencies from the Band.js project.
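If you’d prefer to calculate frequencies rather than hard-code a table, equal temperament makes this a one-line formula. Here’s a minimal sketch, assuming the standard A4 = 440 Hz reference and MIDI note numbering (my own choice of convention – Band.js ships its own table):
// Frequency in Hz of a note in equal temperament, where n is the
// MIDI note number (69 = A4 = 440 Hz, 60 = middle C).
function noteToFrequency(n) {
  return 440 * Math.pow(2, (n - 69) / 12);
}

console.log(noteToFrequency(60)); // ~261.63, the middle C used above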
In the below example, I’ve mapped a tuning frequency to each key on the keyboard. Clicking and holding a key will set the oscillator to that frequency and play the note. Initially, you may think you can simply reuse the same oscillator – perhaps assuming that this would be better for performance. However, let’s see what happens when we do that.
https://jsfiddle.net/remotesynth/73cD5/embedded/result/
Not the result you expected? When you reuse the same oscillator, the frequency transition causes an audible slide between notes. The solution is to use a new oscillator each time you play a note. You can, however, reuse the AudioContext. Don’t worry about performance; oscillators are cheap to create.
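Stripped to its essentials, the fix might look something like the following sketch (playNote is my own helper name, not something from the fiddle):
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var ctx = new AudioContext();

function playNote(frequency) {
  // a fresh oscillator for each note avoids the audible slide
  var o = ctx.createOscillator();
  o.type = 'square';
  o.frequency.value = frequency;
  o.connect(ctx.destination);
  o.start(0);
  return o; // so the caller can stop() it on mouse up
}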
[iajsfiddle fiddle=”69tQE” height=”300px” width=”100%” show=”result,js,resources,html,css” skin=”default”]
Many of you may notice a brief clipping noise when each note stops. Depending on the frequency of the note, it can be quite obvious. How do we get rid of it? First, we need to discuss gain.
Adjusting Amplitude with Gain
In simple terms, if the AudioContext is your amplifier, then the gain is the volume knob. Unsurprisingly, in the Web Audio API, we can set the gain using a GainNode. The gain in a GainNode is typically a value between 0, which is inaudible, and 1, which is full volume. So, for example, 0.5 would play at 50% volume. You can actually set the value above 1 without an error; rather than being clamped to 1, the signal is amplified beyond full volume, which risks clipping distortion at the output.
Let’s see how we handle this in code.
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var ctx = new AudioContext();
var o = ctx.createOscillator();
o.type = 'sine';
o.frequency.value = 261.63;
var g = ctx.createGain();
g.gain.value = 1; // full volume
o.connect(g); // route the oscillator through the gain node...
g.connect(ctx.destination); // ...and the gain node to the output
o.start(0);
Now let’s see how we can use the gain to remove the clipping noise from each note. In the example below, we’ve taken the same code as before but, instead of simply stopping the note, we first set the gain to zero. The note is actually still playing; it just isn’t audible.
[iajsfiddle fiddle=”qUf9e” height=”300px” width=”100%” show=”result,js,resources,html,css” skin=”default”]
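Reduced to its core, the handler that ends the note might look like this (a sketch, where g is the GainNode created in the snippet above):
// instead of calling o.stop(0), silence the note by zeroing the gain;
// the oscillator keeps running, it just can't be heard
g.gain.value = 0;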
This is a very simple usage of gain. You can do more complex things, like transition to a gain value at a particular time using gain.linearRampToValueAtTime(value, endTime) or set the gain at a particular time during playback using gain.setValueAtTime(value, startTime). There are even more complex options available.
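For instance, one common way to avoid the clipping noise entirely is to schedule a short fade-out just before the note ends rather than cutting the gain instantly. A sketch, assuming o, g and ctx from the earlier snippets and an arbitrary one-second note:
var stopTime = ctx.currentTime + 1; // when the note should end

// hold the current volume until 50ms before the end...
g.gain.setValueAtTime(g.gain.value, stopTime - 0.05);
// ...then ramp it down to 0 so the note ends silently
g.gain.linearRampToValueAtTime(0, stopTime);
o.stop(stopTime);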
Playing a Tone of a Particular Length
Whether you are creating music or just giving some form of audio feedback, in most cases you will want a tone to play for a predetermined length. This is easy to do. Remember the start(0) and stop(0) we discussed earlier? Well, all you need to do is adjust those values.
For example, if you say start(0) and stop(1), the tone will play for 1 second. It does get slightly more complex, though: if you were to say start(0.5) and stop(1), the note would start after a half-second delay and play for half a second. Perhaps you thought it would still play for 1 second. It’s easier to think of the time like a playback buffer in which we’ve moved the note half a second into the buffer and stopped it at 1 second into the buffer. This becomes much more complicated and important later, when we discuss sequencing notes/tones.
Calculating Note Lengths
If you want to set the note length based upon musical notations to create a tune, it takes a little bit of math. First, we need to understand that every measure of a song is broken into beats and every type of note plays for a set number of beats.
In simple time, a whole note lasts 4 beats, a half note 2 beats, a quarter note 1 beat, an eighth note half a beat and a sixteenth note a quarter of a beat.
We won’t deal with the complexities of ties or dots at this point. Also, for those musicians out there, we’re only discussing simple time signatures.
Now that we know how many beats each note plays, we need to consider that every song has a tempo and that tempo is usually expressed as beats per minute (BPM). For instance, a song playing at 120 BPM will play 2 beats per second (i.e. 120 beats divided by 60 seconds).
We can now use this information to determine how long a particular note should play at a specified tempo. The basis of the formula is this:
- Tempo = BPM = Beats Per Minute
- BPM/60 = Beats Per Second (ex. 120 BPM = 2 beats per second)
- 1 second / Beats Per Second = Length of a Beat in seconds (i.e. 1 beat at 120 BPM is 0.5 seconds)
- Length of a Beat in seconds * number of beats in the note = length of note in seconds (e.g. a whole note, 4 beats, at 120 BPM is 2 seconds)
Converting that formula to JavaScript looks like this.
// seconds per beat (1 divided by beats per second) multiplied by
// the number of beats in the note
var playlength = 1 / (bpm / 60) * notelength;
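For example, a quarter note (1 beat) at 120 BPM works out to half a second:
var bpm = 120;
var notelength = 1; // a quarter note is 1 beat
var playlength = 1 / (bpm / 60) * notelength; // 0.5 seconds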
The below example expands upon the earlier ones by allowing you to set a BPM and note length and then calculates how long to play the note when a key is struck.
[iajsfiddle fiddle=”USn9V” height=”300px” width=”100%” show=”result,js,resources,html,css” skin=”default”]
It is important to point out that even though the notes aren’t played in a sequence, the time in the buffer for our audio context has still moved ahead. Thus, in order to get the audio to play a second time, you need to get the point in time where the buffer currently sits, which can be done using ctx.currentTime (where ctx is the AudioContext).
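In code, that simply means anchoring both the start and stop times to the context’s clock (a sketch, with playlength calculated as above):
var now = ctx.currentTime; // where the context's clock currently sits
o.start(now);              // start now rather than at 0
o.stop(now + playlength);  // stop after the calculated note length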
One other thing to note here is that you may hear the clipping sound again. There are ways of dealing with this, but it can be a bit on the complicated side. If you are curious, I’d suggest looking into how Band.js handles it by reviewing the source code.
Playing Tones in a Sequence
In many cases, playing a single tone for a specified length of time is not sufficient. Whether you are creating a song or simply offering an audio cue or feedback, many times you’ll want to play a sequence of notes together. This is probably slightly more complicated than you would expect due, in large part, to the way you handle timing.
Remember that I stated earlier that the start and stop times should be thought of in terms of a continuously running audio buffer. The explanation from the spec is much more detailed and is worth sharing.
This is a time in seconds which starts at zero when the context is created and increases in real-time. All scheduled times are relative to it. This is not a transport time which can be started, paused, and re-positioned. It is always moving forward. A GarageBand-like timeline transport system can be very easily built on top of this (in JavaScript). This time corresponds to an ever-increasing hardware timestamp.
In simple terms, if the prior notes took two seconds, you would do something like this (where o is the oscillator and lengthOfNote was calculated using the previously discussed formula):
o.start(2); // start after 2 seconds
o.stop(2 + lengthOfNote);
The tricky part is that you need to keep moving the buffer forward. To do so, you should always start from the current time of the AudioContext, as discussed previously (rather than starting from zero, since your audio sequence may play back multiple times over the life of a single page). Then, as you add each note to the sequence, add that note’s length, calculated using the formula, to the running total. The start time of each note is therefore the current time of the AudioContext plus the length of the previously played notes, and the end time of the note is the start time plus the length of the note.
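A compact sketch of that bookkeeping might look like the following (my own helper, not the code from the fiddle below; each note is a [frequency, beats] pair and ctx is the AudioContext):
function playSequence(notes, bpm) {
  var time = ctx.currentTime; // begin at the context's current time
  notes.forEach(function (note) {
    var o = ctx.createOscillator();
    o.type = 'square';
    o.frequency.value = note[0];
    o.connect(ctx.destination);
    o.start(time);                    // each note starts where the last ended
    time += 1 / (bpm / 60) * note[1]; // advance by this note's length
    o.stop(time);
  });
}

// e.g. C-E-G as quarter notes at 120 BPM
playSequence([[261.63, 1], [329.63, 1], [392.00, 1]], 120);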
In the following example, you can add notes of a particular length to a sequence and play back that sequence. It’s worth noting that if you change the BPM, the length of each note needs to be recalculated (this is a simple example, so you’ll need to reload the iframe to start a new sequence).
[iajsfiddle fiddle=”vq6kN” height=”300px” width=”100%” show=”result,js,resources,html,css” skin=”default”]
As you can see, in this case we are only handling one note sequence at a time, but the same principles apply to creating multiple sequences for more complete songs or complex sounds – simply start them simultaneously.
Where to Go From Here
If you are like me, then you may be thinking to yourself that, while each step wasn’t necessarily difficult, putting the whole thing together isn’t easy. And that’s before we’ve even touched on more complex waveforms, complex gains or anything beyond simple timing and note lengths. If you are just providing simple audio feedback of some sort, that may not be a major hurdle.
However, if you are doing anything more complex – especially if you are trying to create full chiptunes or game music – I highly suggest checking out Band.js. It has solved a lot of these problems for you and allows you to focus on using whatever musical skill and knowledge you may have rather than dealing with the complexities of web audio. In fact, I’ve already written an article on how to use the library that can be found here (though some of my complaints, like the JSON format, have already been addressed).
Resources
If you are looking for additional resources to help you understand the Web Audio API, here are the ones that helped me along the way. Keep in mind that many of these were written prior to changes to the spec, which can happen when you are dealing with bleeding edge browser technologies.
Web Audio Tutorials
- Web Audio API (O’Reilly) by Boris Smus
- Getting Started with Web Audio API (HTML5 Rocks) by Boris Smus
- Developing Game Audio with the Web Audio API (HTML5 Rocks) by Boris Smus
- Playing notes with the Web Audio API Part 1 – Monophonic Synthesis by Chris Lowis
- Playing notes with the Web Audio API Part 2 – Polyphonic Synthesis by Chris Lowis
- Web Audio API Oscillators by Obadiah Metivier
Web Audio References
- Web Audio API at MDN
- Web Audio API at WebPlatform.org