Do you need a DAC?

Oct 20, 22

If you’ve started down the rabbit hole of audiophile gear, you’ve probably come across folks out there imploring you to buy a digital-to-analog converter (DAC). It can be a little startling being told that you don’t have the right equipment, but before you go racing off to figure out how much money you’re going to be blowing: read this article first to find out whether you actually need one. Chances are good that you’re completely fine as is.

This is a long article where I try to be as complete as possible, so feel free to skip around. I just don’t want anyone to see this and feel like they were misled or I glossed over something important.


What is a DAC?

A DAC simply converts a digital audio signal into an analog one so that you can play the sound over headphones or speakers. It’s that simple! DAC chips are found in the source component you’re listening to, whether it’s a laptop, a portable music player or a smartphone, though the analog headphone jack seems to be a dying feature (Editors’ note: a pox on your house, Apple).

A DAC simply converts a digital audio signal into an analog one so that your headphones can then create sound.

Much like headphone amplifiers, standalone DACs came about as a response to poor audio quality at the consumer level. High-end headphones and speakers could reveal source components, their DACs and output stages as the weakest links in the audio chain. This became particularly apparent when consumers started using their PCs as an audio source. Sometimes the DAC would have poor filtering, would be improperly shielded—introducing noise—or the power supply might be poorly regulated, impacting the quality of the rendered output. Lower sample rates, badly encoded MP3s… there were tons of things that children of the 90s had to deal with when it came to audio. Who wants to listen to low-quality tunes?

Related: Troubleshooting DAC issues

But digital music has come a long way since then. Better tech has made shortcomings of even the cheapest chips practically inaudible, while digital music has exploded in quality past the point of diminishing returns. Where it used to be true that your digital Walkman or laptop’s internal DAC chip wouldn’t be suitable for high-bitrate listening, there are plenty of portable devices nowadays that can keep up just fine.

When do I need a DAC?

The main (real) reason you’d get a new DAC today is that your current system—be it your computer, smartphone, or home system—has noticeable noise, objectionable distortion or artifacts, or is incapable of operating at the bitrate of your audio files. If you already have an external DAC and are running into any of those issues, you should check out this article.

Otherwise, if you’ve convinced yourself that your existing DAC is the limiting factor in your playback system and that upgrading it will yield a worthwhile improvement, then that too might be considered a reason to splurge. This would also fall under the category of “looking for something to spend money on,” which isn’t the best use of your funds in terms of upgrade opportunities.

Because DACs are a largely spec-driven item, you can almost always pick out the one you need simply by looking at the packaging. FiiO makes plenty of good products for cheap, and if you want an amplifier to go along with the DAC so you never have to worry about that either, their E10K is a solid pick for under $100. You could also decide to throw money at the problem by picking up an ODAC or O2 amp + ODAC combo, but that may be overkill. Seriously, don’t sink too much money into this. It’s just not worth it.

How does a DAC work?


All audio, whether it’s stored on vinyl or in an MP3, forms a compression wave when it’s played back. When computers record an analog signal, typically it will be displayed in what’s called a waveform, a representation of the wave where the Y axis is amplitude (how powerful the wave is) and the X axis is time. Each wave will have a crest and valley—called a cycle—and how many cycles there are in a second is called frequency (displayed as Hz). If you’ve heard that word before, you know that a sound’s frequency also corresponds to its note: the higher the frequency, the higher the note.

The job of the DAC is to take the digital samples that make up a stored recording and turn them back into a nice continuous analog signal. To do that, it needs to translate the bits of data from digital files into an analog electrical signal at thousands of set times per second, otherwise known as samples. The unit then outputs a wave that intersects all those points. Now, because DACs aren’t perfect, sometimes this leads to problems. These problems include jitter, high-frequency mirroring, narrow dynamic range, and limited bitrate.
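
If you want to see that “wave that intersects all those points” idea for yourself, here’s a minimal sketch of ideal sinc reconstruction, which is what a DAC’s output filter approximates. Python and NumPy are my choice of illustration tools here, not anything a DAC actually runs:

```python
import numpy as np

fs = 8_000          # sample rate in Hz (kept low so the arrays stay small)
f = 440             # a 440 Hz test tone
n = np.arange(64)   # 64 sample indices
samples = np.sin(2 * np.pi * f * n / fs)  # the stored digital samples

# Evaluate the "analog" output on a 16x finer time grid. Ideal
# reconstruction sums one sinc pulse per sample (Whittaker-Shannon);
# a real DAC approximates this with its output lowpass filter.
upsample = 16
t = np.arange(len(n) * upsample) / (fs * upsample)
reconstructed = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

# The smooth curve passes exactly through every original sample point.
print(np.allclose(reconstructed[::upsample], samples))  # True
```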

Before launching into the nuts and bolts of how everything works, you need to know three terms: bitrate, bit depth, and sample rate. Bitrate simply refers to how much data is expressed per second. Sample rate refers to how many samples of data are taken in a second, and bit depth refers to how much data is recorded per sample.
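
To make those three terms concrete, here’s the arithmetic for an uncompressed CD-quality stream (a quick Python sketch using the standard Red Book numbers):

```python
sample_rate = 44_100  # samples taken per second (sample rate)
bit_depth = 16        # bits recorded per sample (bit depth)
channels = 2          # stereo

# Bitrate of uncompressed PCM is just the product of the three.
bitrate = sample_rate * bit_depth * channels
print(f"{bitrate:,} bits per second")  # 1,411,200
print(f"{bitrate / 1000:.1f} kbps")    # 1411.2 kbps: CD quality
```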

What is jitter?


I’m going to preface this section the same way I did in the audio cable myths article: jitter is mostly a theoretical problem at this point, and extremely unlikely to rear its head in any equipment made in the last decade. However, it’s still useful to know what it is and when it might be an issue, so let’s dive in.

So remember how I said imperfect conversion can lead to some problems? Jitter is one that gets a lot of attention, but not much understanding. Jitter is a phenomenon that occurs when the clock, which tells the DAC when to convert each sample, isn’t as accurate as it should be. When the sample points aren’t converted exactly when they should be, this can lead to a change in pitch for very short periods of time. The higher the note that’s being reproduced, the more significant the pitch error.

You're unlikely to encounter noticeable jitter if you have somewhat modern hardware.

However, it should be pointed out that this is another one of those problems that isn’t as common anymore, because DAC units of today are so much better than those of the past. Even measured objectively, jitter tends to have an impact only at super-high frequencies, because those notes have the shortest cycles: a fixed timing error is a bigger slice of each one. However, what makes high-frequency sounds more susceptible to this type of error also makes them less likely to be heard: most people over the age of 20 can’t hear the sounds where jitter is most likely to occur.
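
If you want to convince yourself that the highest notes suffer most, here’s a toy simulation in Python. The 5-microsecond clock error below is wildly exaggerated compared to any real hardware; it’s only there to make the effect easy to measure:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44_100
n = np.arange(fs)                 # one second of sample indices
jitter = rng.normal(0, 5e-6, fs)  # 5 us RMS clock error (huge, for illustration)

def jitter_error_db(f):
    """RMS error (relative to full scale) from sampling a sine at jittered instants."""
    ideal = np.sin(2 * np.pi * f * n / fs)
    jittered = np.sin(2 * np.pi * f * (n / fs + jitter))
    return 20 * np.log10(np.sqrt(np.mean((ideal - jittered) ** 2)))

# The same clock error hurts the high tone about 40 dB (100x) more.
print(f"100 Hz tone: {jitter_error_db(100):.1f} dBFS")     # around -53 dBFS
print(f"10 kHz tone: {jitter_error_db(10_000):.1f} dBFS")  # around -13 dBFS
```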

What is aliasing?


Aliasing occurs when a set of sampled data points can be misinterpreted because fewer than two samples exist per cycle. Aliasing only happens when you sample a signal (during either analog-to-digital conversion in an ADC, or digital downsampling), and it refers to errors in the signal spectrum caused by sampling below the Nyquist rate.

Aliasing doesn’t happen at the output of a DAC. If there isn’t a proper lowpass reconstruction (aka interpolation) filter at the output of the DAC, there will instead be images of the original signal spectrum repeating at multiples of the DAC’s sample rate. These image spectra, caused by high-frequency mirroring, can create intermodulation distortion in the audible signal if not properly filtered out.

Listen here: Podcast: The MP3 Revolution

Since the uppermost limit of the human hearing range is considered to be 20kHz (as in, 20,000 cycles per second), doubling that rate nets you 40,000 samples per second. Allowing 10% for the rolloff of the lowpass reconstruction filter gives 44kHz. Does that last number sound familiar? It should: 44.1kHz was chosen as the sampling rate for CDs when the standard was written in 1980.
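
You can watch the “fewer than two samples per cycle” failure happen directly. In this sketch (NumPy again, pretending we have a 44.1kHz converter with no anti-aliasing filter in front of it), a 30kHz tone and its 14.1kHz alias produce exactly the same sample values:

```python
import numpy as np

fs = 44_100
t = np.arange(1_000) / fs  # 1,000 sample instants

# A 30 kHz tone is above Nyquist (22.05 kHz), so it folds back:
# alias frequency = fs - f = 44,100 - 30,000 = 14,100 Hz.
above_nyquist = np.sin(2 * np.pi * 30_000 * t)
alias = -np.sin(2 * np.pi * 14_100 * t)

# The two are indistinguishable once sampled, which is why the
# anti-aliasing filter has to go *before* the sampler, not after.
print(np.allclose(above_nyquist, alias))  # True
```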

What are bit depth and dynamic range?

Pictured: the compact disc, the original consumer digital audio format.

If you’ve listened to really old MP3 files or crappy MIDI music from your old consoles, you’ll probably notice that the volume doesn’t have much variation in a given music track, or that competing instruments are really really difficult to pick out if they’re all going at once. This is what low dynamic range sounds like. Dynamic range in this instance simply refers to the difference between the lowest and highest output levels available.

What governs the theoretical limits of the dynamic range of an audio file is the bit depth. Basically, every single sample (discussed above) contains information, and the more information each sample holds, the more potential output values it has. In layman’s terms, the greater the bit depth, the wider the range of possible note volumes. A low bit depth, either at the recording stage or in the file itself, will necessarily result in low dynamic range, making many sounds incorrectly emphasized (or muted altogether). Because there are only so many possible loudness values inside a digital file, it should follow that the lower the bit depth, the worse the file will sound. So the greater the bit depth, the better, right?
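
To see what losing bit depth actually does, here’s a quick sketch that quantizes a sine wave down to a handful of levels and measures the error that introduces (pure NumPy; the 4-bit case is exaggerated to make the point obvious):

```python
import numpy as np

t = np.linspace(0, 1, 48_000, endpoint=False)
signal = 0.8 * np.sin(2 * np.pi * 440 * t)  # a 440 Hz tone near full scale

def quantize(x, bits):
    """Round each sample to the nearest of 2**bits evenly spaced levels."""
    step = 2.0 ** (1 - bits)  # spacing between adjacent levels
    return np.round(x / step) * step

# Each extra bit buys roughly 6 dB less quantization noise.
for bits in (4, 8, 16):
    err = signal - quantize(signal, bits)
    noise_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"{bits:2d}-bit: quantization noise at {noise_db:6.1f} dBFS")
```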


Well, this is where we run into the limits of human perception once again. The most common bit depth is 16, meaning every sample holds 16 bits of information, for 65,536 possible integer values. In terms of audio, that’s a dynamic range of 96.33dB. In theory, that means no sound less than 96dB down from peak level should get lost in the noise.
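
That 96.33dB figure isn’t magic, either; it falls straight out of the bit depth, if you want to check my math:

```python
import math

# Dynamic range of an n-bit file: the ratio between the largest
# representable value and the smallest step, expressed in decibels.
for bits in (8, 16, 24):
    dr = 20 * math.log10(2 ** bits)  # roughly 6.02 dB per bit
    print(f"{bits:2d}-bit: {dr:6.2f} dB")
# 8-bit: 48.16 dB / 16-bit: 96.33 dB / 24-bit: 144.49 dB
```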

While that may not sound terribly impressive, you really need to think hard about how you listen to music. If you’re like me, that’s over headphones 99+% of the time, and you’re going to be listening to your music at a volume much lower than that. For example, I try to limit my sessions to about 75dBSPL so I don’t cook my ears prematurely. At that level, added dynamic range isn’t going to be perceptible, and anyone telling you otherwise is simply wrong. Additionally, your hearing isn’t equally sensitive across all frequencies, so your ears are the bottleneck here.

While I'm a super big crank when it comes to silly-ass excesses in audio tech, this is one point I'm forced to concede. However, the necessity of 24-bit files for casual listeners is dramatically overstated.

So why do so many people swear by 24-bit audio when 16-bit is just fine? Because that’s the bit depth where there theoretically shouldn’t ever be any problems for human ears. If you like to listen to recordings that are super quiet (think orchestral music)—where you need to really crank the volume in order for everything to be heard—you need a lot more dynamic range than you would with an over-produced, too-loud pop song. While you’d never crank your amp to produce 144dB(SPL) peaks, 24-bit encoding would allow you to approach that without the recording’s noise floor becoming an issue.

Additionally, if you record music, it’s always better to record at a high bit depth and sample rate and then downconvert, rather than the other way around. That way, you avoid ending up with a big file that has low dynamic range, or worse: added noise. While I’m a super big crank when it comes to silly-ass excesses in audio tech, this is one point I’m forced to concede. However, the necessity of 24-bit files for casual listeners is dramatically overstated.

What’s a good bitrate?


While bit depth is important, what most people are familiar with in terms of bad-sounding audio is either limited bitrate, or aggressive audio data compression. Ever listen to music on YouTube, then immediately notice the difference when switching to an iTunes track or high-quality streaming service? You’re hearing a difference in data compression quality.

If you’ve made it this far, you’re probably aware that the greater the bit depth is, the more information the DAC has to convert and output at once. This is why bitrate—the speed at which your music data is decoded—is somewhat important.

320kbps is perfectly fine for most applications... and truth be told most people can't tell the difference.

So how much is enough? I usually tell people that 320kbps is perfectly fine for most applications (assuming you’re listening to 16-bit files). Hell, it’s what Amazon uses for its store, and truth be told most people can’t tell the difference. Some of you out there like FLAC files—and that’s fine for archival purposes—but for mobile listening? Just use a 320kbps MP3 or Opus file; audio compression has improved by leaps and bounds in the last 20 years, and newer compression standards are able to do a lot more with a lot less than they used to. A low bitrate isn’t an immediate giveaway that your audio will be bad, but it’s not an encouraging sign.

If you’ve got space to spare, maybe you don’t care as much how big your files are—but smartphones generally don’t all come with 128GB standard… yet. But if you can’t tell the difference between a 320kbps MP3 and a 1400+kbps FLAC, why would you fill 45MB of space when you could get away with 15MB?
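
For what it’s worth, those size figures check out with simple arithmetic (Python one more time, assuming a roughly six-minute track and my own ballpark of ~1,000kbps for FLAC-compressed CD audio; the real number varies with the music):

```python
seconds = 6 * 60  # a roughly six-minute track

def size_mb(kbps):
    """File size in megabytes for a stream at the given kilobits per second."""
    return kbps * 1000 * seconds / 8 / 1_000_000

# FLAC losslessly squeezes the 1,411 kbps CD stream down to roughly
# 1,000 kbps on typical material (an assumed ballpark, not a constant).
print(f"320 kbps MP3:     {size_mb(320):4.1f} MB")   # ~14 MB
print(f"~1,000 kbps FLAC: {size_mb(1000):4.1f} MB")  # ~45 MB
```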