Dynamic Range Processing and Its Influence on Perceived Timing in Electronic Dance Music*
Ragnhild Brøvig-Hanssen, Bjørnar E. Sandvik, and Jon Marius Aareskjold-Drecker
KEYWORDS: groove, rhythm, microrhythm, timing, perception, analysis, sound, dynamics, sidechain compression, electronic dance music
ABSTRACT: In this article, we explore the extent to which dynamic range processing (such as compression and sidechain compression) influences our perception of a sound signal’s temporal placement in music. Scholars have previously noted that certain uses of sidechain compression, because they reshape the sound signal’s envelope, can produce peculiar rhythmic effects. In this article, we have tried to interrogate and complicate this notion by linking a description of the workings and effects of dynamic range processing to empirical findings on the interaction between sound and perceived timing, and by analyzing multitracks and DAW project files, as well as released audio files, of selected EDM tracks. The analyses of the different EDM tracks demonstrated that sidechain compression affects the music in many possible ways, depending on the settings of the compressors’ parameters, as well as the rhythmic pattern and the sonic complexity of both the trigger signal and the sidechained signal. Dynamic range processing’s impact on groove and perceived timing indicates, in line with previous findings, that sound and timing interact in fundamental ways. Because of this interaction, then, we cannot limit ourselves to technical terms that describe how particular effects are achieved if we want to fully understand the grooves that are characteristic of EDM or other music. We must also consider how listeners experience these effects.
DOI: 10.30535/mto.26.2.3
Copyright © 2020 Society for Music Theory
[1.1] In our preparations for a separate article on electronic dance music (EDM) grooves,(1) we interviewed several esteemed Norwegian EDM producers about the parameters they considered to be crucial to achieving the desired groove and feel of their music.(2) To our surprise, they all highlighted the importance of dynamics, including the individual tracks’ relative volumes, the sounds’ varying intensity and accentuation, and the dynamics-related use of signal-processing effects such as compression and sidechain compression. In EDM, sound has always had a particularly central role, and producers within these genres have long been eager to experiment with technological processing effects and editing tools to create new sonic effects and soundscapes. Accordingly, we anticipated that the producers would be concerned with sound, including dynamics, but we found it intriguing that they regarded dynamics as a significant groove parameter. This observation became the point of departure for the present article, which analyzes how dynamics and timing interact and together shape particular groove experiences.
[1.2] As Jay Hodgson (2016) has pointed out: “Side-chain pumping, ducking, and envelope following recur constantly in modern EDM. They are as common in the genre as power chords and tapping once were in heavy metal. Yet they remain conspicuously absent from research on the genre and from research on popular music in general.” Dynamic range processing is usually intended to change the amplitude or volume of a sound or a mix, to alter the volume relations among the different sounds in a mix, to unmask interfering sounds (via sidechain compression, for example), to narrow or expand an audio signal’s dynamic range, and otherwise to create musical and aesthetic effects (such as “pumping”) in a non-traditional manner. Although dynamic range processing is acknowledged to impact the music’s sound, it is less obviously connected to the listener’s experience of the timing of that sound. In research into the micro level of rhythm, particular attention has been given to the temporal aspects of rhythmic organization—that is, to inter-onset intervals and durations. Less focus has been devoted to the influence of non-temporal sonic features of sound on how listeners experience musical groove and perceive the music’s timing, and on how producers approach the sonic and temporal parameters as interlinked domains.
[1.3] The present article argues that the creative use of dynamics is also highly interesting from a rhythmic perspective, and that dynamics are key to understanding the aesthetics and feel of EDM, as well as other popular music genres. We start by giving a brief overview of findings from perceptual experiments that examine multidimensional perceptual processing, including the interaction between time and shape/envelope, and between time and intensity. Next, we discuss how the groove experience, including the sounds’ perceived timing, can be influenced by amplitude adjustment, compression, and sidechain compression. Regarding the latter, we distinguish between sidechain compression used to unmask elements in the mix and the creative application of it often referred to as “sidechain pumping,” which results in constant level changes at regular intervals. In the final section, we analyze how sidechain pumping is used creatively to achieve various rhythmic effects, and discuss how it may also impact the sounds’ perceived timing. All the producers we interviewed for our previous article on EDM gave us multitracks and project files for one of their music recordings, which allowed us to analyze in close detail their use of processing effects and these effects’ impact on rhythm. It also allowed us to compare, aurally and visually, an individual track with its applied sound-processing effects and with those effects bypassed. We also analyze released audio tracks by international EDM artists in order to demonstrate the broad range of possible applications of this effect. The analyses are supported by technical discussions of the workings of the relevant processing effects and our in-depth knowledge of music production and engineering, various sequencer programs’ tools and plug-ins, acoustics, and the genres of EDM.(3)
The Interaction between Sound and Timing
[2.1] Little scholarly attention has been devoted to the role of dynamics in EDM or popular music in general, and even less has been written about the interaction between dynamics and rhythm. There are, however, a few pioneering exceptions that have noted how the creative application of dynamic processing, and particularly an exaggerated use of sidechain compression, affects our groove experience. Although Hodgson (2011) is mostly concerned with the musical function of signal processing techniques in general, not their impact on rhythm in particular, he does note that the overt use of sidechain compression shapes the dynamic contour of sounds and can in turn transform pad sounds into a series of “rhythmic upstrokes.” Mike D’Errico discusses the rhythmic effects of sidechain compression more explicitly and observes that it can create musical complexity at the microlevel and affect our perception of the sounds’ timing: “This wave-like quality that results from the intense sidechain compression disrupts the listener’s sense of solid rhythmic pacing by melting the quantized . . .”
[2.2] In recent decades, researchers have begun to go beyond notational parameters to devote attention to microrhythmic aspects of music that are usually ignored by the notational system, arguing that these rhythmic nuances are crucial to the “feel” of the music (see, for example, Bengtsson and Gabrielsson 1983; Danielsen 2006, 2010b; Johansson 2010a; Keil 1987, 1995; and Kvifte 2004). It has further been argued that microrhythmic deviations from a perceived norm are relevant to the way in which the audience interacts with the music in terms of bodily movement and dancing (see, for example, Danielsen 2012 and Zeiner-Henriksen 2017). As Anne Danielsen (2010a, 1–2) notes, the assumption that microrhythmic deviations from a perceived norm are crucial to achieving a successful groove or musical feel and to triggering bodily movements is challenged by music that adheres to a grid-based aesthetic. EDM typically articulates its metrical grid rather than deviating from it, even to the extent that it can be entirely quantized, but it nevertheless presents a viable and absorbing groove and musical feel. Moreover, the fact that this grid-based music dominates the club scene demonstrates its manifest ability to stimulate bodily movements and dancing. Clearly, then, we must acknowledge, as Danielsen writes, “the importance of approaching rhythmic events not as entries in a virtual metric grid but as actual sounding gestures” (2012, 159). Rhythm, in short, is not an exclusively temporal feature: its sound-related features are equally relevant to how we experience the groove and perceive rhythmic timing.
[2.3] There exist some experimental studies that probe the relationship between sound and timing in auditory perception in general using musical or “quasi”-musical sounds. John W. Gordon (1987) found that neither a sound’s physical onset (signal onset) nor its attack peak is necessarily identical to what he called the “perceptual attack time” (PAT) of the sound. Later, Matthew J. Wright defined PAT as a sound’s “perceived moment of rhythmic placement” (2008, 31). The concept is thus very similar to what the speech community and more recent rhythm perception studies refer to as the “perceptual center” (P-center), which was defined by John Morton, Steve Marcus, and Clive Frankish (1976) as a speech sound’s “perceptual moment of occurrence.” According to Gordon (1987), the sound’s perceived timing (or PAT) is located somewhere between the physical onset of the waveform and its physical attack peak—that is, somewhere along the acoustical rise time of the sound. This implies that even if a sound’s onset or attack peak within a waveform representation is localized on the beat grid with millisecond precision, its P-center may be perceived as appearing considerably behind the beat. We must, in other words, account for discrepancies between physical onset timing and perceived timing.
[2.4] Gordon (1987, 94) also found that a sound’s P-center depends on both subject and type of trial, and its precise location is therefore hard to identify. The methodological challenges of identifying a sound’s P-center have also been explored by, among others, Villing (2010) and London et al. (2019). Danielsen (2010b) has further pointed out that a sound may not necessarily have a precise location but rather a more broadly distributed center—what she calls a “beat bin.” Other experiments have also shown that the P-center relates to a sound’s shape/envelope (Gordon 1987; Villing 2010; Vos and Rasch 1981; and Wright 2008). Generally, listeners experienced the P-center of a sound with a slow rise time as later than that of a sound with a quicker rise time. Danielsen et al. (2019) (see also London et al. 2019) found, in addition, that a sound’s duration is relevant to its P-center: the combination of fast attack and/or short duration leads to an early P-center and also low variability (among the experiment’s participants), and thus to a narrow beat bin or more precise location. The combination of slow attack and/or long duration, on the other hand, leads to a later P-center and also high variability, and thus a wider beat bin or more distributed location.
[2.5] In addition to the perceptual interaction between timing and shape/envelope, experiments have demonstrated a perceptual interaction between timing and intensity. Dirk-Jan Povel and Hans Okkerman (1981) and Hasan G. Tekman (2001, 2002) found that listeners heard variations in timing as variations in intensity, which means that even if a sound is placed slightly off the grid, it is not necessarily heard as such—its metrical displacement can also be perceived as a change in intensity. On the other hand, if a sound is placed slightly off the grid and has an intensity accent, it is likely to be heard as deviating from the grid.(4) Tekman (2001) also found that sounds with intensity accents were perceived to be longer in duration than non-accentuated sounds. Overall, then, sound-related features such as intensity are probably crucial for a sound to be perceived as slightly behind or ahead of the beat; off-the-beat-ness is accentuated through a combination of temporal and sonic features.
[2.6] Given that a sound’s physical onset is not the same as its P-center, rhythm can be framed productively as the interaction between physical sounds and our endogenous perceptual and cognitive mechanisms used in the structuring of those sounds (Danielsen 2006, 2010b). The findings that perceived timing is interlinked with shape/envelope and intensity will inform our following analyses of the ways in which dynamic range processing influences our experience of musical groove. The finding that a slow rise time leads to a wide beat bin and/or later P-center will be of particular relevance, as will the notion that intensity variations might be perceived as timing variations, and vice versa.
Shaping Groove through Dynamic Range Processing
[3.1] Several of the EDM producers we interviewed reported that adjusting the balance between individual tracks does affect the groove experience (if not necessarily its timing component). For example, one argued that when each individual track attains its ideal volume, the music starts “breathing” (his metaphor for how something grooves). In terms of creating a good groove, the dynamics of each element, and the dynamic relation among elements, are more important than timing adjustments that make the music less static by supplying a “human touch”: “Turn the volume a bit down, and then raise it again . . .” Whether such adjustments also affect the perceived timing of sounds is, however, a question of accentuation. As Cooper and Meyer define it, an accent is
a stimulus (in a series of stimuli) which is marked for consciousness in some way. It is set off from other stimuli because of differences in duration, intensity, pitch, timbre, etc. . . . [T]he accented beat is the focal point, the nucleus of the rhythm, around which the unaccented beats are grouped and in relation to which they are heard. (1963, 8)
Because an overall volume change does not set one sound apart from others, there is no indication that overall volume changes should affect the perception of the sounds’ timing. It might nevertheless change the groove experience, since sound and groove are intrinsically related (Danielsen 2012). Accentuation variations and changes in a sound’s inner dynamic may, on the other hand, affect not only the overall groove experience but also the perception of the sound’s timing.
[3.2] An effective means of changing a sound’s inner dynamic and creating accentuation in the music is the use of dynamic range compression. A compressor is a complex volume controller that responds to and reduces the amplitude level of an audio signal that exceeds a set threshold. It does so by an amount that is determined by the selected attenuation ratio. In other words, it shortens the distance between a signal’s peaks and valleys, thus reducing the dynamic range of the signal. The compressor starts working according to a set attack time and stops working according to a set release time. By adjusting the threshold level and the ratio, as well as the attack and release times, the producer “remodels” the sound’s amplitude shape or envelope—that is, how the sound’s dynamics evolve over time. As such, the compressor can be used to increase the overall volume level of the sound without running the risk of distortion on a recording. Compression is also used creatively to shape the envelopes of sounds, which has in turn had a profound impact on popular music sound.
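For readers who want the compressor’s parameters in more concrete form, the following minimal sketch (in Python, using NumPy) implements the general principle described above: a level detector smoothed by attack and release times, and a gain reduction applied only to the portion of the signal that exceeds the threshold. It is an illustration under assumed parameter values, not a reconstruction of any particular plug-in used by the producers we interviewed.

```python
# Minimal feed-forward compressor sketch (illustrative only).
# `signal` is a mono array of samples; all parameter defaults are assumptions.
import numpy as np

def compress(signal, sr, threshold_db=-18.0, ratio=4.0,
             attack_ms=10.0, release_ms=120.0):
    """Return (compressed signal, time-varying linear gain)."""
    signal = np.asarray(signal, dtype=float)
    # Per-sample smoothing coefficients derived from the attack and release times.
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))

    env = 0.0                      # smoothed level estimate (linear amplitude)
    gain = np.ones(len(signal))
    for n, x in enumerate(np.abs(signal)):
        # Envelope follower: reacts at the attack rate when the level rises
        # and at the release rate when it falls.
        coeff = att if x > env else rel
        env = coeff * env + (1.0 - coeff) * x

        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = level_db - threshold_db
        # Reduce only the portion of the level above the threshold, scaled by
        # the ratio (a 4:1 ratio lets 1 dB through for every 4 dB of overshoot).
        reduction_db = over_db * (1.0 - 1.0 / ratio) if over_db > 0.0 else 0.0
        gain[n] = 10.0 ** (-reduction_db / 20.0)

    return signal * gain, gain
```

Shortening attack_ms lets the compressor clamp down on a sound’s initial transient, whereas a longer attack lets the transient pass before the gain reduction sets in; this is the envelope “remodeling” referred to above.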
[3.3] Because dynamic range compression changes both the envelope of the sound and its intensity, it is likely that a compressed sound will be perceived to have a different timing than the same sound uncompressed. First of all, a compressor usually molds a sound’s attack phase, including its rise time. As discussed previously, there is a perceptual interaction between shape/envelope and timing—a sound’s envelope or shape will impact where we experience the temporal location of the sound’s P-center. If the compression shortens a sound’s attack phase, its P-center will likely be perceived as early and quite precise, especially if the sound is of short duration. If, on the other hand, the compression lengthens a sound’s attack phase, its P-center will likely be perceived as later and less defined (as a wider beat bin), especially if the sound is of long duration (see, for example, Danielsen et al. 2019; London et al. 2019). The P-center of the sound will, in other words, change depending on the length of the rise time, as well as the duration of the sound. Second, compression is often used to make a sound stick out, or to create an intensity accent—as mentioned, there is perceptual interaction between intensity and timing. For example, Tekman (2001) found that a sound with intensity accents can be perceived as longer in duration than a non-accentuated sound. However, Tekman did not clarify whether this means that the extended duration occurs at the sound’s start time (that its onset, and thus also P-center, is perceived as earlier) or at the sound’s end time (that its offset is perceived as later). Because a compressor usually narrows the duration of the rise time, we may assume that compression makes us perceive the sound as appearing earlier than when it is uncompressed. If a sound that is already temporally accentuated, in terms of happening slightly before the beat, is also accentuated in intensity, it is very likely to be perceived as early.
Example 1. Upper and lower left: a sound’s envelope presented as uncompressed and as reshaped by the compressor’s attack time. Upper and lower right: a signal triggering the compressor to attenuate the input signal before it feeds it back in again
[3.4] The compressor is also commonly used to “sidechain” audio signals. Sidechain compression involves a compressor that uses two input signals instead of one: a main input signal and a sidechain input signal. The latter is used to control the level of compression, so that each time this so-called trigger signal surpasses the compressor’s set threshold, the amplitude of the main signal is reduced before it returns to its previous level. When this happens relatively slowly and in a clearly audible manner, we experience the effect known as pumping. The compressor still reshapes the envelope of the main signal, but it does so in a reverse manner to how it usually operates. The release time of the compressor usually controls the end phase of a signal, but in this case it actually reshapes the rise time (see Example 1).(5)
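The two-input arrangement can be sketched in the same way: the level detection runs on the trigger (sidechain) input, while the resulting gain is applied to the main input. Again, this is only an illustrative sketch with assumed parameter values, following the same scheme as the compressor sketch above.

```python
# Sidechain compression sketch: the gain reduction is computed from `trigger`
# (e.g., a kick drum) but applied to `main` (e.g., a synth pad). Both are mono
# arrays of equal length; parameter defaults are illustrative assumptions.
import numpy as np

def sidechain_compress(main, trigger, sr, threshold_db=-24.0, ratio=8.0,
                       attack_ms=1.0, release_ms=250.0):
    main = np.asarray(main, dtype=float)
    trigger = np.asarray(trigger, dtype=float)
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))

    env = 0.0
    gain = np.ones(len(main))
    for n, x in enumerate(np.abs(trigger)):   # level detection runs on the trigger ...
        coeff = att if x > env else rel
        env = coeff * env + (1.0 - coeff) * x
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(level_db - threshold_db, 0.0)
        gain[n] = 10.0 ** (-(over_db * (1.0 - 1.0 / ratio)) / 20.0)

    return main * gain                        # ... but the resulting gain ducks the main signal
```

Because the release time governs how quickly the gain returns to unity after each trigger hit, it is the release setting, rather than the attack, that shapes the rise time of the ducked signal, which is precisely the reversal described above.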
[3.5] Sidechain compression is quite commonly used as a mixing strategy, because it is an effective way to unmask elements in the mix that operate in overlapping frequency ranges, thereby making room for all of the sounds in the mix. For example, the low-end part of the kick drum is often masked by other low-frequency sounds occurring simultaneously. Sidechain compression can solve this problem by using the kick drum as a trigger signal to attenuate the interfering sounds, dynamically ducking them until the release control of the compressor allows them to reenter the mix. In addition to preventing the muddiness caused by interfering sounds, sidechain compression can also create a dynamic and rhythmic effect by making the sound and groove less static. For example, sidechain compression can make some of the sounds, such as the kick drum, more accentuated, which affects the overall groove experience. It furthermore changes the rise time of the attenuated signal, which may in turn impact its perceived P-center, as we will discuss in more detail below.
Example 2. Upper waveform: a sustained sound signal as it appears without compression. Lower waveforms: the resulting amplitude envelopes of a trigger and input signal from the “pumping” effect, in which the input signal is deprived of its sustain and release
[3.6] The most obvious impact associated with the overt and creative use of sidechain compression is the so-called pumping effect, which Roey Izhaki describes as the “audible unnatural level changes associated primarily with the release of a compressor” (2008, 160). The typical sidechain pumping effect is achieved by having a percussive sound (most often a kick drum) trigger compression on a sustained sound, such as a synth pad, bass synth, or other ambient sound source. The pumping effect, or “regularized rhythmic flexing,” as Hodgson (2011) describes it, occurs when the release time of the compressor is set to be relatively long, producing a sequence of volume swells in the sustained sound at the offset of each kick drum attack (see Example 2). This particular sidechain pumping effect can be heard in mainstream tracks such as “Titanium” (2011), by David Guetta featuring SIA, or in remixes of mainstream hits such as Circuit’s remix of Ke$ha’s “Blow” (2011) or Steve Aoki’s remix of Kid Cudi featuring MGMT and Ratatat’s “Pursuit of Happiness” (2010).
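The pumping effect can be simulated with entirely hypothetical material, reusing the sidechain_compress sketch above: a sustained pad is ducked by a four-on-the-floor kick at an assumed tempo of 128 bpm, and a long release time lets the pad swell back slowly after each hit. None of these values are taken from the tracks cited above.

```python
# Hypothetical demonstration of sidechain pumping (reuses sidechain_compress above).
import numpy as np

sr = 44100
bpm = 128
beat = int(sr * 60 / bpm)                    # samples per quarter note (~469 ms)
n = beat * 8                                 # two bars of 4/4

# Sustained pad: a constant 110 Hz tone.
pad = 0.5 * np.sin(2 * np.pi * 110 * np.arange(n) / sr)

# Four-on-the-floor kick: a short decaying 60 Hz burst on every beat.
kick = np.zeros(n)
for b in range(0, n, beat):
    t = np.arange(min(2000, n - b))
    kick[b:b + len(t)] = np.sin(2 * np.pi * 60 * t / sr) * np.exp(-t / 400.0)

# Long release (350 ms): the pad's level is choked on each beat and then
# swells back up, producing the periodic "pumping" contour described above.
pumped_pad = sidechain_compress(pad, kick, sr, release_ms=350.0)
```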
[3.7] Technically, when used to create a pumping effect, the compressor remakes the envelope of the synth into a distinctive temporal shape. A natural sound consists of its rise time (the time from the sound’s onset to its attack point), attack point (the sound’s peak), decay (the decrease from the attack level to the sustain level), sustain phase (usually the most durable sequence of the sound), and release (the decay time from the sound’s sustain phase to its end). A sidechained signal, on the other hand, often consists exclusively of rise time and attack point, or those parts plus a sustain phase but with no release, as it is interrupted by the trigger signal (usually the kick drum). Moreover, it is the release time of the compressor that defines the rise time and the attack point of the sidechained signal, and the release time is usually relatively long when set to produce a pumping effect. The sound’s attack point will thus tend to deviate slightly from the offbeat upstroke, making the sidechained synth pad sound like a late or pushed upstroke (which corresponds well to D’Errico’s (2015) and Zeiner-Henriksen’s (2017) observations, mentioned in the introduction, that sidechained sounds are pushed slightly off the beat). Given the finding that the P-center shifts later as the rise time increases (Danielsen et al. 2019; Gordon 1987; London et al. 2019; Villing 2010; Vos and Rasch 1981; and Wright 2008), our perceptual system is likely to reinforce our experience of the synth pad as appearing behind the beat. The sidechained signal’s relatively slow rise time can thus achieve a perceived belated microrhythmic deviation from the waveform’s physical attack point. Moreover, its P-center will likely be perceived as less defined and as distributed across a wider beat bin, due to its long rise time.
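To illustrate the scale of this displacement, here is a back-of-the-envelope calculation with assumed values (neither the tempo nor the release time is taken from any of the tracks discussed here): if the ducked signal needs roughly the full release time to recover after each kick, the peak of the resulting swell can land audibly behind the offbeat eighth note.

```python
# Illustrative arithmetic only; all values are assumptions.
bpm = 124
quarter_ms = 60000 / bpm           # ~484 ms between four-on-the-floor kick hits
upstroke_ms = quarter_ms / 2       # ~242 ms: the offbeat eighth-note "upstroke"
release_ms = 300                   # the compressor's gain recovery time

lag_ms = release_ms - upstroke_ms  # ~58 ms: the swell peaks after the upstroke
print(f"The pad's reshaped attack peak arrives about {lag_ms:.0f} ms behind the upstroke.")
```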
[3.8] Zeiner-Henriksen introduces the onomatopoeic term “poum-tchak” to describe house grooves characterized by the typical downstroke-upstroke pattern, whereby “poum” refers to a kick drum articulating the downstroke and “tchak” refers to a hi-hat articulating the upstroke. Together, they generate what he calls “the basic poum-tchak curve” (2010, 3). Used in this way, then, sidechain compression typically positions the perceived tchak some milliseconds behind the upstroke position on the grid, depending on the set release time of the compressor and the character of the sound that is being compressed. Even when this pushed upstroke is very subtle, it is significant in relation to how we experience the groove. It is not of the magnitude that we find in some neo-soul and experimental hip-hop tracks—we do not feel “seasick,” to use Danielsen’s (2010b) metaphor, but instead diverted or engaged by a sense of destabilizing rhythmic flavor and excitement. Combining kick drum sounds that accentuate the grid with sidechained synth pads that push the upstroke slightly behind that grid, this familiar sidechain effect results in a stretched or asymmetrical curve that differs slightly from the typically rigid and symmetrical poum-tchak curve of disco and house music without sidechain pumping. At the same time, this stretched, asymmetrical curve is very stable and occurs at regular intervals, eliminating any chronic sense of unpredictability. This slightly pushed upstroke is often precisely what producers seek. For example, one of the EDM producers that we interviewed explained that he was very aware of the impact that sidechain pumping has on the perceived timing of sounds and works with this technique accordingly—that is, both to create the right swing and to generate the peculiar sound with which it is associated. In what follows, we will analyze how sidechaining is used creatively to achieve various rhythmic effects.
Pumping Grooves’ Impact on Timing Perception
[4.1] One extreme example of sidechain pumping is Porter Robinson’s “Natural Light” (2014). Here, the rise times of the track’s main elements—the bass and synth pad—are completely “choked” on every downbeat by a sidechained compressor with both the kick drum and the snare drum as trigger signals, before they are slowly released back into the mix until a new downbeat abruptly chokes them again. The sidechained signal is thus deprived of its original shape, leaving it to consist only of rise time—or, more precisely, rise time and a new attack point that also serves as its abrupt release. As a result, listeners are not allowed to relax into the sound but must follow it as it rises to its end. This peculiar shape also deprives the sound of a precise center—it is nothing, in effect, but a wide beat bin. Because of its slower tempo in relation to the aforementioned examples of “Titanium,” “Blow,” and “Pursuit of Happiness,” this track ably demonstrates sidechain pumping’s effect on the sidechained signal’s rise time. Its slower tempo reduces our “rhythmic tolerance” (Johansson 2010b), making us more aware of the nuances between the beats than a faster tempo would. The pumping of the human heart provides a helpful metaphor for describing this particular sidechain-pumping effect: while the closing of the heart valves results in a defined beat, a much more diffuse and expanding sound—reminiscent of inhaling breath—results from the opening of the heart valves that allows blood to flow throughout the body. These two sounding movements of the heart valves are likely to be heard as one rhythmic gesture rather than two. Likewise, the sidechained synth pad and bass and the kick drum in “Natural Light” might be received as only a downstroke (instead of a downstroke-upstroke)—but a pumping one.
Example 3. Transcription of the rhythmic figure played by the piano in “It Ain’t Me” (2017) by Kygo and Selena Gomez
[4.2] When understood as the interaction between physical sounds and our perception of those sounds, rhythm becomes even more complex when the sidechained signal is not a sustained sound but rather a sound (or combination of sounds) that is part of a rhythmic figure. In “It Ain’t Me” (2017) by Kygo and Selena Gomez, part of the sidechained signal consists of a piano sound playing a simple rhythmic figure (see Example 3). The sidechain compression here is triggered by the kick drum’s four-on-the-floor quarter notes, but because the rhythmic piano figure mainly consists of notes placed between the trigger beats, its rhythmic pattern is, for the most part, retained. The contrasting effect of the piano sound on the second beat thus becomes even more pronounced: the original quick attack of the piano tone is instead “faded in” in tandem with the release of the compressor, and the sound is perceived as being closer to the sixteenth note following the beat, resulting in an experience of either a microrhythmic deviation or a sort of “phantom” (virtual) attack point. Again, thanks to the song’s relatively slow tempo, listeners are likely to be quite sensitive to the ways in which sidechain compression produces microrhythmic deviations from a perceived norm, and to the rhythmic friction between the various sounds in the mix.
Example 4. Waveform representations (and transcription) of the plucked synth appearing between 0:15 and 0:29 in Seeb’s remix of Mike Posner’s “I Took a Pill in Ibiza” (2016), depicting the signal’s envelope without sidechain compression (above), and with sidechain compression (below) that is triggered by the signal depicted in the middle
Example 5. Waveform representations of the chopped-up vocal appearing in the hook after the refrain (for example, between 0:57 and 1:35) in “I Took a Pill in Ibiza,” depicting the signal’s envelope as it appears without sidechain compression (above) and with sidechain compression (below) that is triggered by the signal depicted in the middle
Example 6. Waveform representations (and transcription) of the bass synth appearing between 0:48 and 1:02 in “Jealous” by TRXD featuring Harper (2018), depicting the synth signal’s envelope as it appears without sidechain compression (above) and with sidechain compression (below) that is triggered by the signal depicted in the middle
Example 7. Waveform representations of the hi-hat pattern in “Jealous” (2018) by TRXD featuring Harper, depicting the signal’s envelope as it appears without sidechain compression (above) and with sidechain compression (below) that is triggered by the signal depicted in the middle
[4.3] Another example of a pumped rhythmic figure occurs in Seeb’s hit remix of Mike Posner’s “I Took a Pill in Ibiza” (2016). Here, the compressor is placed on a “bus” channel in the mix that encompasses all the musical elements with the exception of drums and effects. One of these compressed sounds is a plucked synth that plays a rhythmic pattern that, in combination with the basic four-on-the-floor kick drum pattern of the track, can be interpreted as a 4:3 cross rhythm. This in itself creates a certain rhythmic friction and pace in the track, but when the sidechain compression kicks in on each quarter note articulated by the trigger signal, the reshaping of the synth sounds’ amplitude envelopes leads to a much more accentuated and rhythmically disjointed flow. The effect is obvious when we listen to the sequence between 0:15 and 0:29 and compare the waveforms with and without sidechain compression (see Example 4). In this sequence, we first hear the rhythmic figure of the plucked synth with a tail of reverb but no sidechain, and then we hear it as being regularly attenuated by an inaudible trigger signal, before a kick drum enters the mix in the main hook of the song (at 0:58). The kick drum is clearly felt in the groove even at the point when it is not actually playing. The envelopes of the different events in the pattern are altered by the compressor in various ways depending on their placement in relation to the compression process. Every sound in the pattern that coincides with the trigger signal is deprived of its natural attack point and thus also of a precise temporal location. The sounds are presented as short volume swells in the immediate wake of the attenuated beat (as illustrated by the red squares in Example 4), and the decay of some of the notes that appear before the trigger signal is abruptly cut short (as illustrated by the blue square in Example 4). Moreover, the sounds that are placed between the attenuated beats (see notes in Example 4) become more audible than the ones placed on or closer to them, giving the groove a new accentuation; whereas the pattern was dynamically consistent before it was processed by the compressor, its second, third, and fourth beats now stand out as clear accents.
[4.4] A similar effect is achieved when a chopped-up vocal in the same song is sidechained (see Example 5). In this case, a large part of the pattern coincides with the trigger signal, creating a series of volume swells that completely alter the rhythmic feel of the melody. The sidechain pumping here results in either an experience of the microrhythmic delay of the signals compared to their original temporal positions or the feeling of a phantom attack point (a concealed entrance).
[4.5] In the previous examples, the trigger signal is a regular four-on-the-floor kick drum. Sometimes, however, the trigger signal plays a more intricate rhythmic pattern, as is the case in “Jealous” (2018), by TRXD featuring Harper (see Example 6). In this track, the trigger signal is much more complex than the more conventional four-on-the-floor pattern often used to trigger sidechain pumping. At first, the effect of the sidechain compression may appear subtler than in the previous example, for three reasons. First, the kick drum is always audible when the compression is applied (in contrast to the “I Took a Pill in Ibiza” example). Second, the tempo of the track is faster than that of the previous example (140 bpm compared to 102 bpm). Third, the release time of the compressor is relatively short. However, when one bypasses the sidechained compressors placed on elements across the mix (which we were able to do because we had access to the project file for “Jealous”), it becomes apparent how crucial the effect is to the track’s groove and timing. Most of the track’s elements are strictly quantized to the grid, and without any dynamic processing, the feel of the groove comes across as quite rigid, lacking both rhythmic accentuation and dynamic variability. The producers themselves described this particular means of creating sidechain pumping as a series of “cool bounces” in the groove, as opposed to the predictable and regular four-on-the-floor pumping. To that conventional, static effect is added a series of volume swells that are crucial to giving the otherwise rigidly programmed track its dynamic feel.
[4.6] Another example from “Jealous” (2018) involves the ways in which the hi-hat cymbal sound is sidechained. The hi-hat pattern is constantly on the grid, down to the sixty-fourth-note rolls, and it has no dynamic accentuation whatsoever. Instead, the producers have applied sidechain pumping to it to introduce an interesting variation in velocity that interacts dynamically with the main rhythmic pattern of the track produced by the kick drum (see Example 7).
[4.7] An additional layer of complexity can be added by triggering the compression with an off-the-grid, or even unpredictable, rhythmic pattern—one that is itself characterized by microrhythmic deviations from the perceived norm. In addition, the attenuated signal may consist of more complex sounds, such as a sample made up of several (possibly percussive) elements, or several divergent tracks may be sidechained. Perhaps the most notable instances of sidechain compression are found in tracks where it has been applied to the whole mix, or where sidechain pumping is used in a variety of ways across the mix. When music combines rhythmic and sonic variability in the source material with a loose and off-the-grid beat in the trigger signal, the result is often a complex microrhythmic design and a dynamic, wavering aesthetic.(6) Overall, such tracks are characterized by a fluctuating and destabilizing groove, and, because the attack points of the kick drum are themselves off the grid, the fluctuations in amplitude that the kick drum triggers further augment this feeling of constantly being forced ahead of or behind the beat. When the compression is applied to the whole mix, it impacts everything in the track’s rich aural space and rhythmic structure. A good example of this effect is found in Teeb’s remix of Nosaj Thing’s “Caves” (2010). Its mixture of synthetic sounds, samples, percussive elements and sustained sounds, and different signal-processing effects (such as LFOs, reverb, and delay) constantly shifts as the kick drum attenuates the mix before the volume swells back up again. In “Caves,” the destabilization of the sounds’ locations at multiple layers results in what has been called a “wobbly” or “seasick” feel to the track.(7)
Conclusion
[5.1] Scholars have previously noted that certain uses of sidechain compression can produce peculiar rhythmic effects (D’Errico 2015; Hodgson 2011; and Zeiner-Henriksen 2017). In this article, we have tried to interrogate and complicate this notion by linking a description of the workings and effects of dynamic range processing to empirical findings on the interaction between sound and perceived timing, and by analyzing multitracks and DAW project files, as well as released audio files, of selected EDM tracks. The analyses of the different EDM tracks demonstrated that sidechain compression affects the music in many possible ways, depending on the settings of the compressors’ parameters, as well as the rhythmic pattern and the sonic complexity of both the trigger signal and the sidechained signal.
[5.2] By linking the analyses to empirical findings on the interaction between sound and perceived timing, we showed that the reshaping of a sound’s envelope or rise time caused by sidechain compression will likely impact where we perceive the sound’s P-center to be located, as well as whether we perceive its center as a precise point in time or as more distributed (a wider beat bin). Accentuation (another effect of sidechain compression) may, as demonstrated, furthermore impact a sound’s perceived duration. The analysis of sidechain-based rhythms has, in fact, proved to be a rich and complex area of study, one that this article only begins to address.
[5.3] Dynamic range processing’s impact on groove and perceived timing indicates, in line with previous findings, that sound and timing interact in fundamental ways. Temporality is not the only relevant dimension when studying or experiencing groove; sound-related features of the music, including its dynamics, are also crucial. Rhythm, or groove, arises in the interaction between sound and timing, and between physical signals and human perception. Because of this interaction, then, we cannot limit ourselves to technical terms that describe how particular effects are achieved if we want to fully understand the grooves that are characteristic of EDM or other music. We must also consider how listeners experience these effects.
Ragnhild Brøvig-Hanssen
University of Oslo
RITMO: Interdisciplinary Center for Rhythm, Time and Motion
Department of Musicology
P.O. Box 1017, Blindern
0315 Oslo, Norway
ragnhild.brovig-hanssen@imv.uio.no
Bjørnar E. Sandvik
University of Oslo
RITMO: Interdisciplinary Center for Rhythm, Time and Motion
Department of Musicology
P.O. Box 1017, Blindern
0315 Oslo, Norway
bjornar.sandvik@imv.uio.no
Jon Marius Aareskjold-Drecker
University of Agder
Department of Popular Music
Postboks 422
4604 Kristiansand
jon.m.aareskjold@uia.no
Works Cited
Bengtsson, Ingmar, and Alf Gabrielsson. 1983. “Analysis and Synthesis of Musical Rhythm.” In Studies in Music Performance, ed. Johan Sundberg, 27–60. Royal Swedish Academy of Music.
Butler, Mark J. 2006. Unlocking the Groove: Rhythm, Meter, and Musical Design in Electronic Dance Music. Indiana University Press.
Brøvig-Hanssen, Ragnhild, Bjørnar E. Sandvik, Jon Marius Aareskjold-Drecker, and Anne Danielsen. Under review. “A Grid in Flux: Sound and Timing in Electronic Dance Music.”
Clarke, Eric F. 1988. “Generative Principles in Music Performance.” In Generative Processes in Music: The Psychology of Performance, Improvisation, and Composition, ed. John A. Sloboda, 1–26. Oxford University Press.
Cooper, Grosvenor, and Leonard B. Meyer. 1963. The Rhythmic Structure of Music. University of Chicago Press.
Danielsen, Anne. 2006. Presence and Pleasure: The Funk Grooves of James Brown and Parliament. Wesleyan University Press.
—————. 2010a. “Introduction: Rhythm in the Age of Digital Reproduction.” In Musical Rhythm in the Age of Digital Reproduction, ed. Anne Danielsen, 1–18. Ashgate.
—————. 2010b. “Here, There and Everywhere: Three Accounts of Pulse in D’Angelo’s ‘Left and Right.’” In Musical Rhythm in the Age of Digital Reproduction, ed. Anne Danielsen, 19–36. Ashgate.
—————. 2012. “The Sound of Crossover: Micro-Rhythm and Sonic Pleasure in Michael Jackson’s ‘Don’t Stop ’Til You Get Enough.’” Popular Music and Society 35 (2): 151–68.
Danielsen, Anne, Carl Haakon Waadeland, Henrik Gunnar Sundt, and Maria Witek. 2015. “Effects of Timing and Tempo on Sound: Timbral and Dynamic Aspects of Microrhythm.” Journal of the Acoustical Society of America 138 (4): 2301–16.
Danielsen, Anne, Kristian Nymoen, Evan Anderson, Guilherme Schmidt Câmara, Martin Torvik Langerød, Marc R. Thompson, and Justin London. 2019. “Where Is the Beat in That Note? Effects of Attack, Duration, and Frequency on the Perceived Timing of Musical and Quasi-Musical Sounds.” Journal of Experimental Psychology: Human Perception and Performance 45 (3): 402–18.
D’Errico, Mike. 2015. “Off the Grid: Instrumental Hip-Hop and Experimentalism after the Golden Age.” In The Cambridge Companion to Hip-Hop, ed. Justin A. Williams, 280–91. Cambridge University Press.
Drake, Carolyn, and Caroline Palmer. 1993. “Accent Structures in Music Performance.” Music Perception 10 (3): 343–78.
Gabrielsson, Alf. 1974. “Performance of Rhythm Patterns.” Scandinavian Journal of Psychology 15 (1): 63–72.
—————. 1999. “The Performance of Music.” In The Psychology of Music (2nd ed.), ed. Diana Deutsch, 501–602. Academic Press.
Goebl, Werner. 2001. “Melody Lead in Piano Performance: Expressive Device or Artifact?” Journal of the Acoustical Society of America 110 (1): 563–72.
Gordon, John W. 1987. “The Perceptual Attack Time of Musical Tones.” Journal of the Acoustical Society of America 82 (1): 88–105.
Hodgson, Jay. 2011. “Lateral Dynamics Processing in Experimental Hip-Hop: Flying Lotus, Madlib, Oh No, J-Dilla and Prefuse 73.” Journal on the Art of Record Production 5. https://www.arpjournal.com/asarpwp/lateral-dynamics-processing-in-experimental-hip-hop-flying-lotus-madlib-oh-no-j-dilla-and-prefuse-73/.
Hodgson, Jay, with Steve MacLeod. 2016. Representing Sound: Notes on the Ontology of Recorded Musical Communications. Wilfrid Laurier University Press. https://www.wlupress.wlu.ca/Books/R/Representing-Sound.
Izhaki, Roey. 2008. Mixing Audio: Concepts, Practices, and Tools. Focal Press.
Johansson, Mats. 2010a. “Rhythm into Style: Studying Asymmetrical Grooves in Norwegian Folk Music.” PhD diss., University of Oslo.
—————. 2010b. “The Concept of Rhythmic Tolerance: Examining Flexible Grooves in Scandinavian Folk Fiddling.” In Musical Rhythm in the Age of Digital Reproduction, ed. Anne Danielsen, 69–84. Ashgate.
Johansson, Mats, Anne Danielsen, Ragnhild Brøvig-Hanssen, Bjørnar E. Sandvik, and Kjetil K. Bøhler. Under review. “Shaping Rhythm: Timing and Sound in Five Rhythmic Genres.” Journal of New Music Research.
Keil, Charles. 1987. “Participatory Discrepancies and the Power of Music.” Cultural Anthropology 2 (3): 275–83.
—————. 1995. “The Theory of Participatory Discrepancies: A Progress Report.” Ethnomusicology 39 (1): 1–19.
Kvifte, Tellef. 2004. “Description of Grooves and Syntax/Process Dialectics.” Studia Musicologica Norvegica 30: 54–77.
London, Justin, Kristian Nymoen, Martin Torvik Langerød, Marc Richard Thompson, David Løberg Code, and Anne Danielsen. 2019. “A Comparison of Methods for Investigating the Perceptual Center of Musical Sounds.” Attention, Perception, and Psychophysics 81: 2088–101.
Morton, John, Steve Marcus, and Clive Frankish. 1976. “Perceptual Centers (P-Centers).” Psychological Review 83 (5): 405–8.
Palmer, Caroline. 1996. “On the Assignment of Structure in Music Performance.” Music Perception 14 (1): 23–56.
Povel, Dirk-Jan, and Hans Okkerman. 1981. “Accents in Equitone Sequences.” Perception and Psychophysics 30: 565–72.
Repp, Bruno H. 1996. “Patterns of Note Onset Asynchronies in Expressive Piano Performance.” Journal of the Acoustical Society of America 100 (6): 3917–32.
Taylor, Timothy D. 2001. Strange Sounds: Music, Technology, and Culture. Routledge.
Tekman, Hasan G. 2001. “Accenting and Detection of Timing Variations in Tone Sequences: Different Kinds of Accents Have Different Effects.” Attention, Perception, and Psychophysics 63 (3): 514–23.
—————. 2002. “Perceptual Integration of Timing and Intensity Variations in the Perception of Musical Accents.” Journal of General Psychology 129 (2): 181–91.
Villing, Rudi. 2010. “Hearing the Moment: Measures and Models of the Perceptual Centre.” PhD diss., National University of Ireland, Maynooth.
Vos, Joos, and Rudolf Rasch. 1981. “The Perceptual Onset of Musical Tones.” Perception and Psychophysics 29 (4): 323–35.
Wright, Matthew. 2008. “The Shape of an Instant: Measuring and Modelling Perceptual Attack Time with Probability Density Functions.” PhD diss., Stanford University.
Zeiner-Henriksen, Hans T. 2010. “The ‘PoumTchak’ Pattern: Correspondences between Rhythm, Sound, and Movement in Electronic Dance Music.” PhD diss., University of Oslo.
—————. 2017. “Sound Modulations and Pulse Perception.” Paper presented at the TIME seminar at the University of Oslo, September 23, 2017.
Discography
David Guetta feat. Sia. 2011. “Titanium.” Nothing but the Beat. What A Music/Virgin/EMI.
Ke$ha. 2011. “Blow” (Circuit’s remix). Blow [Explicit] (EP). RCA Records Label.
Kid Cudi feat. MGMT and Ratatat. 2010. “Pursuit of Happiness” (Steve Aoki remix [explicit]). Pursuit of Happiness [Explicit]. Universal Motown Records.
Kygo and Selena Gomez. 2017. “It Ain’t Me.” Single. Interscope/Sony/Ultra.
Mike Posner. 2016. “I Took a Pill in Ibiza” (Seeb remix). Island/Monster Mountain, Llc/Universal.
Nosaj Thing. 2010. “Caves” (Teeb remix). Drift Remixed. Timetable Records.
Porter Robinson. 2014. “Natural Light.” Worlds. Astralwerks.
TRXD feat. Harper. 2018. “Jealous.” Single. Warner Music.
Footnotes
* The authors want to thank Anne Danielsen, principal investigator of the research project TIME: Timing and Sound in Musical Microrhythm (of which the authors have been part), who provided helpful comments on an earlier draft of this article, and Nils A. Nadeau, who copyedited the manuscript. We also thank the members of the TIME project, Hans T. Zeiner-Henriksen, and the EDM producers whom we interviewed for their stimulating discussions.
1. In this article, we use the term electronic dance music (EDM) in its broadest sense to encompass traditional techno and house music, trance music, so-called intelligent dance music (IDM), urban music with electronic sound signatures, and electro-pop crossover music. EDM is, in other words, an umbrella term for several subcategories of stylistic differences, which is how it has been used by scholars including Butler (2006) and Taylor (2001). The Norwegian EDM producers/production teams that the authors interviewed included Per Martinsen (Mental Overdrive), Charlotte Bendiks, Knut Petter Sævik (Mungolian Jet Set), Espen Berg (Seeb), and Truls Dyrstad and David Atarodiyan (TRXD). For the main article resulting from these interviews, see Brøvig-Hanssen et al. Under Review. See also Johansson et al. Under Review.
2. Ragnhild Brøvig-Hanssen and Bjørnar E. Sandvik wrote the text, and Jon Marius Aareskjold-Drecker contributed to the content with his professional producer’s insight. Brøvig-Hanssen functioned as the editor of the article, Bjørnar E. Sandvik made the illustrations in the examples, and all three co-authors conducted the interviews with the EDM producers. The work was partially supported by the Research Council of Norway through its Centers of Excellence scheme, project number 262762, and the TIME project, grant number 249817.
3. The authors all have music production as their primary research area and extensive hands-on experience in the field. In particular, Jon Marius Aareskjold-Drecker has collaborated extensively with the US-based Norwegian production teams Stargate and Espionage on productions for artists such as Beyoncé, Train, and Rihanna, and as an engineer, producer, and mixer he has contributed to a wide range of Grammy-winning productions within pop, rock, electronic music, urban, and jazz genres.
4. This evidence for a perceptual interaction between perceived timing and perceived intensity is supported by research into music performance. A number of experiments have indicated that pianists play the principal melodic voice in a polyphonic performance both louder and earlier than the other voices (see, for example, Goebl 2001; Palmer 1996; and Repp 1996). Concerning the production of accents, researchers have likewise observed a consistent relationship between duration and intensity—namely, that accented beats tend to be lengthened in performance (see, for example, Clarke 1988; Drake and Palmer 1993; and Gabrielsson 1974, 1999). An experiment conducted by Danielsen et al. (2015) also pointed to an intimate relationship between intensity and timing in drum performance. They found that drummers who want to alter the timing in the direction of a laid-back beat tend to increase their strokes’ sound-pressure level.
5. The amplitude envelope of sounds can also be manipulated in other ways to achieve an effect similar to sidechain compression—for example, by manually drawing the envelope shape or using an LFO plug-in. For the purposes of our investigation, however, the means by which this particular envelope manipulation is achieved are less interesting than the perceptual impacts it has. Moreover, all our informants used sidechain compression and nothing else to achieve the ducking or pumping effect, and we will therefore only refer to sidechain compression.
6. D’Errico (2015) and Hodgson (2011) provide good examples of this particular effect of sidechain pumping.
7. See, for example, D’Errico (2015) and Danielsen (2010b).
Copyright Statement
Copyright © 2020 by the Society for Music Theory. All rights reserved.
[1] Copyrights for individual items published in Music Theory Online (MTO) are held by their authors. Items appearing in MTO may be saved and stored in electronic or paper form, and may be shared among individuals for purposes of scholarly research or discussion, but may not be republished in any form, electronic or print, without prior, written permission from the author(s), and advance notification of the editors of MTO.
[2] Any redistributed form of items published in MTO must include the following information in a form appropriate to the medium in which the items are to appear:
This item appeared in Music Theory Online in [VOLUME #, ISSUE #] on [DAY/MONTH/YEAR]. It was authored by [FULL NAME, EMAIL ADDRESS], with whose written permission it is reprinted here.
[3] Libraries may archive issues of MTO in electronic or paper form for public access so long as each issue is stored in its entirety, and no access fee is charged. Exceptions to these requirements must be approved in writing by the editors of MTO, who will act in accordance with the decisions of the Society for Music Theory.
This document and all portions thereof are protected by U.S. and international copyright laws. Material contained herein may be copied and/or distributed for research purposes only.
Prepared by Fred Hosken, Editorial Assistant