Psychoacoustics
MODULE 6

READINGS
(No textbook reading)

HOMEWORK


Lecture notes

 

TOPICS
 Perceptual attributes of acoustic waves - Timbre
 Cognitive aspects of timbre
 Beating & roughness - consonance / dissonance
 Time and pitch, loudness, & timbre
 Interlude: Spectral music (optional)

 

 


 

    

Perceptual attributes of acoustic waves - Timbre

 

The Importance of Studying Timbre (tone color)

      Timbre...

Approaches to the examination of timbre

  • Acoustical/Psychoacoustical: examining timbre in terms of its physical (signal and spectral features of sound waves) and physiological (function of the ear) correlates.
  • Semantic/Cognitive/Aesthetic: examining timbre in terms of its function, meaning, value, and affective (i.e. emotional) qualities.
     

Timbre and Spectrum

Definition: Two sounds of the same pitch and loudness may have recognizably different qualities: for instance the sounds of string vs. reed instruments in the orchestra. These distinguishing qualities of sound are collectively referred to as timbre.
More specifically, the ANSI (1994) definition of timbre, as refined by Plomp, describes it as "that attribute of sensation in terms of which a listener can judge that two steady complex tones having the same loudness, pitch and duration are dissimilar" (Plomp, 1970: 398).
Timbre is a perceptual attribute of sound waves related mainly to a complex wave's spectral distribution (for a reminder see the discussion on spectra in the Module 2 lecture notes).

Example 1 (.wav file): A sustained tone played by a Bb soprano clarinet is followed by the same tone presented by gradually increasing and then decreasing the number of spectral components (from the lowest to the highest in frequency).
Example 2 (.wav file): A 220Hz sine tone of amplitude A is followed by 7 more tones with increasingly many harmonic spectral components (i.e. 2f, 3f, ... 8f), each component at amplitude A/n (n: harmonic number).
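A minimal sketch of how a tone series like Example 2 could be synthesized, assuming Python with numpy and scipy; the sample rate, duration, output file name, and base amplitude are arbitrary choices, not taken from the example file:

```python
# Sketch: a 220 Hz tone built from 1 to 8 harmonic components, the nth
# component at amplitude A/n, roughly in the spirit of Example 2 above.
import numpy as np
from scipy.io import wavfile

SR = 44100        # sample rate (Hz) - arbitrary choice
DUR = 1.0         # duration of each tone (s)
F0 = 220.0        # fundamental frequency (Hz)
A = 0.3           # amplitude of the fundamental (kept low to avoid clipping)

t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)
tones = []
for num_components in range(1, 9):                 # 1 to 8 spectral components
    tone = np.zeros_like(t)
    for n in range(1, num_components + 1):
        tone += (A / n) * np.sin(2 * np.pi * n * F0 * t)   # nth harmonic at A/n
    tones.append(tone)

wavfile.write("harmonic_buildup_demo.wav", SR,
              np.concatenate(tones).astype(np.float32))
```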

If we consider sinusoidal waves a limit case of complex waves (i.e. complex waves with a single spectral component), we can expect that changing the frequency of a sinusoidal wave will change not only its pitch but also its timbre (i.e. sound quality). Perceptual experiments have confirmed this. Listen to this example (ascending sine-tone glide: 50-5000Hz). Would you agree that changing frequency changes both the timbre and pitch of the tone?

It is difficult to agree on a single, all-encompassing definition of timbre, a fact confirmed by the long list of timbre definitions available in the literature (see an instructive partial list of definitions published up to 1997, compiled by G. Sandell).

Bregman (1990) points out the problems associated with traditional definitions of timbre that approach it as the aspect that perceptually differentiates sounds with the same pitch, loudness and duration:

"[The ANSI definition] is, of course, no definition at all. For example, it implies that there are some sounds for which we cannot decide whether they possess the quality of timbre or not. In order for the definition to apply, two sounds need to be able to be presented at the same pitch, but there are some sounds, such as the scraping of a shovel in a pile of gravel, that have no pitch at all. We obviously have a problem: Either we must assert that only sounds with pitch can have timbre, meaning that we cannot discuss the timbre of a tambourine or of the musical sounds of many African cultures, or there is something terribly wrong with the definition." (Bregman, 1990: 92).

The definition by Hajda et al. (1997), below, escapes these pitfalls.

"Based on research findings and [previous] definitions... it is clear that timbre has two principle constituents: (1) It 'conveys the identity of the instrument that produced it' (Butler, 1992, p. 238), and (2) It is representable by a palette or family of palettes (see Martens, 1985) in which tones from different sources can be related along perceptual dimensions. The first constituent is nominal or categorical in nature: the clarinet has a characteristic to its sound, regardless of the pitch, loudness, etc. The second constituent is a hybrid of categorical and ordinal organization: the clarinet is not nasal and is therefore differentiated from the oboe, which is nasal." (Hajda et al. 1997: 302).

The signal/spectral parameters related to timbral similarity/difference, as identified in a series of studies (Plomp, 1970 & 1976; Grey & Gordon, 1978; the Kendall & Carterette studies in Hajda et al., 1997; etc.), have been described as:

  1. signal time variance (envelope);

  2. degree of attack and decay synchrony of the sine components;

  3. presence or absence of high-frequency inharmonic energy in the attack portion of a signal;

  4. spectral energy distribution (frequency, amplitude and phase values of the sine components of a complex signal - may change with changes in intensity and register, even for a given instrument); and

  5. spectral energy distribution time-variance (spectral flux or "jitter").

Spectral energy distribution can itself be described in terms of the following acoustic parameters (deduced/reduced from Grey, 1977; Kendall et al., 1999; Lakatos, 2000; McAdams et al., 1995):

  1. spectral centroid (center of amplitude-weighted frequency distribution);

  2. spectral bandwidth (spread of frequency distribution);

  3. spectral density (total energy per critical band); and

  4. spectral inharmonicity (departure from integer multiple relationship among frequency components relative to some fundamental component)

Spectral centroid (manifested perceptually as a sound’s degree of nasality-brightness / acuteness-dullness) has been well-defined in the literature (e.g. Kendall and Carterette, 1996; Kendall et al., 1999; Marozeau et al., 2003) and is discussed briefly next. All other parameters on the list are captured and represented in models calculating/quantifying the perception of auditory roughness (also discussed below).

Helmholtz was the first scholar to link timbre (a perceptual aspect of sound waves) to spectral distribution (a physical aspect of sound waves). He specifically focused on the spectral distribution of the steady state portion of sound signals (defined below). This approach overlooked several acoustical aspects of sound signals, such as attack (onset transients) and signal/spectral time variance, both of which have been proven important to timbre perception.
 

Nasality - Brightness

Timbre is a multidimensional perceptual attribute of sound, with spectral distribution differences manifesting themselves perceptually in several ways (nasality, brightness, roughness, etc).
Kendall et al. (1999) have argued that the degree of a sound’s “nasality” constitutes the primary dimension of timbre. They link “nasality” directly to spectral centroid, a measure of the energy distribution in the spectrum of a complex signal (within a given time window).

[Kendall, R., Carterette, E., & Hajda, J. (1999). Perceptual and acoustical features of natural and synthetic orchestral instrument tones. Music Perception, 16(3), 327-364]

In general, high centroid values correspond to spectra with more high-frequency energy and to 'nasal' sounds, while low centroid values correspond to spectra with more low-frequency energy and to 'acute' or 'dull' sounds.  Qualitatively, spectral centroid can be likened to a spectrum's "center of gravity" or "teeter-totter fulcrum," with amplitude values representing "weights" and frequency values representing the "position" of each weight on the teeter-totter.

The formula, below, relates spectral centroid to the frequency (fn) and amplitude (An) values of a complex signal's spectral components, for a total of N components:

Centroid = Σ(n=1 to N) fn·An / ( f1 · Σ(n=1 to N) An )   (formula explained in class).

Including f1 in the formula's denominator results in centroid values that are independent of fundamental frequency and, therefore, musical register.
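A minimal sketch of this calculation in Python (numpy assumed); the component values used below are illustrative, reusing the A/n amplitudes of Example 2 above:

```python
# Sketch: register-independent spectral centroid,
#   Centroid = sum(fn * An) / (f1 * sum(An)),
# computed from lists of component frequencies and amplitudes.
import numpy as np

def spectral_centroid(freqs, amps, normalize_by_f1=True):
    """Amplitude-weighted mean frequency of a set of spectral components.

    With normalize_by_f1=True the result is expressed in multiples of the
    fundamental, making it independent of register (see note above).
    """
    freqs = np.asarray(freqs, dtype=float)
    amps = np.asarray(amps, dtype=float)
    centroid = np.sum(freqs * amps) / np.sum(amps)   # 'center of gravity' in Hz
    if normalize_by_f1:
        centroid /= freqs[0]                         # divide by the fundamental f1
    return centroid

# Example: 7 harmonics of 220 Hz with amplitudes A/n (as in Example 2 above)
f = [220.0 * n for n in range(1, 8)]
a = [1.0 / n for n in range(1, 8)]
print(spectral_centroid(f, a))   # ~2.7: energy weighted toward the low components
```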

Listen to a pair of harmonic complex tones with the same number of components (7), the same fundamental frequency (220Hz), but different centroid values (.wav file). The first tone has most of its energy in the low components (low centroid value), while the second has most of its energy in the high components (high centroid value).

The didjeridu is an example of an instrument whose performance practice and aesthetic qualities rely heavily on spectral centroid manipulation.
Listen to "Green Frog", a Wangga song from Arnhem Land, Northern Australia  (.rm file).

Brightness is the second most important dimension of timbre, closely correlating with a register-dependent centroid (i.e. the centroid computed without f1 in the formula's denominator), adjusted to account for perceptual data that directly examine the dependence of brightness on fundamental frequency.  [Marozeau, J. and de Cheveigne, A. (2007). The effect of fundamental frequency on the brightness dimension of timbre. J. Acoust. Soc. Am., 121(1): 383-387].

Roughness

Roughness is the next most important perceptual dimension of timbre and relates to a complex tone's spectral distribution in a more complex manner than nasality and brightness relate to spectral centroid. More details shortly.

Formants

The resonant characteristics of an instrument (voice included) result in the enhancement of certain spectral regions of the sounds produced.  When these enhanced spectral regions remain the same regardless of fundamental frequency they are called formants and contribute to the identification of instrumental timbres and vocal sounds.  More specifically, formants appear to be responsible for the recognizable differences between various vowel sounds and have been used successfully in speech recognition and synthesis applications.

Combination (subjective) tones

The term Combination Tones was introduced by Helmholtz to describe tones that can be traced not in a vibrating source but in the combination of two or more waves originating in vibrating sources. Combination tones are the products of wave interference and have physical, physiological, neurological, and cognitive origins.

A specific combination tone, the difference tone, is one of the perceptual manifestations of amplitude fluctuation.
The difference tone is a tone with pitch corresponding to the frequency |f1 - f2| (i.e. the amplitude fluctuation rate), heard when two tones with fundamental frequencies f1 and f2 are played together. Experimental evidence indicates that the difference tone can be partially traced to the nonlinear response of the inner ear (cochlea).

In this example, there are 4 successive tones:
(a) 700Hz, (b) 1000Hz, (c) 700+1000Hz, and (d) 300Hz.
When listening to tone (c) at a high level, the difference tone (300Hz) can be heard in the background. Tone (d) is presented as a reference.
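A minimal sketch of how the four tones of this example could be generated (Python with numpy/scipy; sample rate, level, and durations are arbitrary, and the difference tone itself arises in the ear, not in the generated file):

```python
# Sketch: the four tones of the difference-tone example:
# (a) 700 Hz, (b) 1000 Hz, (c) 700 + 1000 Hz, (d) 300 Hz reference.
import numpy as np
from scipy.io import wavfile

SR, DUR = 44100, 2.0
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

def sine(freq, amp=0.4):
    return amp * np.sin(2 * np.pi * freq * t)

a = sine(700)
b = sine(1000)
c = sine(700) + sine(1000)   # played loudly, a 300 Hz difference tone may be heard
d = sine(300)                # reference tone at the difference frequency

gap = np.zeros(int(0.5 * SR))          # half a second of silence between tones
sequence = np.concatenate([a, gap, b, gap, c, gap, d])
wavfile.write("difference_tone_demo.wav", SR, sequence.astype(np.float32))
```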

Whether created in the physical (e.g. in the sound source or the propagation medium) or physiological frame of reference (e.g. in the ear), combination tones belong to the spectral distribution of a signal, as this is manifested in basilar membrane disturbance patterns.

 


 

Categorical versus Continuous Timbre Perception

Spectral distribution is not a 100% reliable acoustical correlate of timbre. 
The same instrument may produce notes with widely differing spectral distributions, when performed at different intensity levels (Butler, 1992: 72) or registers, but will most likely retain its timbral identity. 

For example, the signals of low, middle, and high pitched notes on a piano will have very different spectral distributions but will, in general, continue to be identified as belonging to a single instrumental timbre category, that of a piano, suggesting that timbre perception may be categorical.

Conversely, keeping the same spectral distribution across the playing range of a single instrument results in tones that cannot convincingly convey the instrument's identity.
For example, listen to the sound of a violin playing C4.
Now listen to the same sound transposed up to C5 or transposed down to C3, changing the pitch (register) while keeping the spectral distribution the same. Do the two transposed tones convincingly convey the instrument's identity as being that of a violin? Why / why not?
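One simple way to produce such transpositions is resampling, which shifts every component by the same factor and therefore preserves the relative spectral distribution (moving any formant regions along with the pitch, unlike a real violin). A minimal sketch, with hypothetical file names standing in for the course examples:

```python
# Sketch: octave transpositions of a recorded tone by resampling.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

sr, violin_c4 = wavfile.read("violin_C4.wav")        # hypothetical input file
violin_c4 = violin_c4.astype(np.float32) / 32768.0   # assumes a 16-bit mono file

# Halving the number of samples and keeping the original sample rate compresses
# the waveform in time by 2, doubling every frequency (C4 -> C5); doubling the
# number of samples transposes down an octave (C4 -> C3).
violin_c5 = resample(violin_c4, len(violin_c4) // 2)
violin_c3 = resample(violin_c4, len(violin_c4) * 2)

wavfile.write("violin_C5_transposed.wav", sr, violin_c5.astype(np.float32))
wavfile.write("violin_C3_transposed.wav", sr, violin_c3.astype(np.float32))
```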

At the same time, several studies that gradually morph signals from one instrumental spectral distribution to another have shown that timbre perception is continuous rather than categorical (i.e. perceptually, the timbre does not abruptly move from the first instrument to the second at some fixed point in the morphing stage but appears to transform perceptually in a gradual manner, as does the physical stimulus).  In this sound morphing example, a C4 tone played on a French Horn gradually (in 10 steps) morphs into a C4 tone played on a Bb Clarinet (after Butler, 1992: 134).  Does the transition from French Horn to Clarinet seem gradual or abrupt? 
 
Based on such observations, the previously discussed ability to group together the widely different spectral distributions of different notes on the violin or piano under a single timbre category (that of the violin or the piano) must be based on higher level cognitive processing, guided by our experience with an instrument's sound throughout its pitch range.  Studies that show a larger decline in timbre identification with changes in register for unfamiliar versus familiar instrument sounds confirm this claim.

 


 

Timbre and Time-Variance of Sound Waves (signal envelope, spectral flux, etc.)

The spectrum of a complex signal describes the amount of energy in each of the signal's frequency components (partials) but does not describe how the amount of energy in a signal or spectrum changes with time, a change that also influences timbre. Signal time-variance can be represented through a signal's envelope and spectral time-variance (spectral flux) can be represented through time-variant spectra, sonograms, or individual component amplitude/frequency envelopes.

Signal Envelope

Attack: The portion of the envelope tracing the development of a sound signal towards its maximum amplitude. It represents how energy is built up in a vibrating system and, in music, can be manipulated through instrument generator excitation methods (bowing, plucking, striking with a hard or soft mallet, etc.).

Steady state: The portion of the envelope during which the amplitude of the signal remains fairly constant. It is the result of continuous supply of energy in a vibrating system and can be manipulated through performance techniques such as vibrato, muting, damping, bowing pressure, driver excitation location, etc.

Decay: The portion of the envelope that traces the drop in amplitude of a sound signal from its maximum value to zero. Decay occurs when energy stops being supplied to a vibrating system, and represents how energy stored in a system eventually dissipates. It depends on the resonance and feedback characteristics of the vibrating system (sharply tuned resonators decay faster than broadband resonators) and on the environment within which the system vibrates (sound decays faster in rooms with absorbent boundary surfaces, than in rooms with reflective boundary surfaces).

[re-read the section on Musical Instruments (Module 2 notes)]

In this graphic example of a signal envelope's three sections, note that the indicated boundaries between attack and steady state or steady state and decay are approximate and, in most cases, not clear-cut (especially the boundary between attack and steady state).  In addition, the so-called "steady state" represents a portion of the signal during which several of its acoustical aspects are changing (i.e. this state is not truly "steady").
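A minimal sketch of imposing an idealized attack / steady-state / decay envelope on a sine tone (segment lengths, shapes, and the 440Hz carrier are arbitrary illustrations, not measurements of any instrument):

```python
# Sketch: a three-part signal envelope applied to a sine tone.
import numpy as np
from scipy.io import wavfile

SR = 44100
attack, steady, decay = 0.05, 0.8, 0.6   # segment durations in seconds

env = np.concatenate([
    np.linspace(0.0, 1.0, int(SR * attack)),               # attack: rise to maximum
    np.ones(int(SR * steady)),                              # "steady state" (idealized as flat)
    np.exp(-4.0 * np.linspace(0.0, 1.0, int(SR * decay))),  # decay: exponential fall-off
])

t = np.arange(len(env)) / SR
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t) * env
wavfile.write("envelope_demo.wav", SR, tone.astype(np.float32))
```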

The significance of envelope to timbre can be demonstrated by playing a sound backwards (the signal's evolution through time changes, while its average spectral distribution remains the same). Click below for the three examples played in class (Houtsma et al., 1987).
Example 1 - Example 2 - Example 3
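Reversing a recording is straightforward to try; a minimal sketch (the file name is a placeholder, not one of the class examples):

```python
# Sketch: play a sound backwards - the average spectral distribution is
# unchanged, but the order of attack and decay is inverted.
from scipy.io import wavfile

sr, signal = wavfile.read("piano_tone.wav")                  # hypothetical input file
wavfile.write("piano_tone_reversed.wav", sr, signal[::-1])   # reverse sample order
```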
 
Based on envelope shapes, we can classify signals in two, very broad, categories:
i) Continuous signals, where most of the energy is contained in the steady state portion of the envelope (e.g. signal of a bowed violin string)
ii) Impulse signals, where the envelope has no steady state portion and the attack portion is much shorter and steeper than the decay portion (e.g. signal of a struck marimba bar).

The attack portion of the signal envelope (portion that contains a signal's onset transients) contributes to the timbre of any signal but significantly more so to the timbre of impulse signals.
In this example, three orchestral instruments are presented with the attack portion of their signal removed. Can you recognize the instruments? [key at the bottom of the page]

Spectral time-variance
(Time-variant spectra;  Sonograms;  Individual component amplitude/frequency envelopes)

In addition to signal envelope, time variant characteristics of a signal that influence its timbre can be expressed in terms of spectral time-variance. Spectral time-variance is manifested as changes in the frequency and amplitude of a complex tone's components with time, and can be represented in the form of time-variant spectra (Butler, 1992: 73), sonograms (time on the x axis, frequency on the y axis and intensity in different color shades - Figure 1, below), or amplitude and frequency envelopes of the individual components (displaying time on the x axis and the amplitude or frequency of a component on the y axis - from www.ee.columbia.edu/~ronw/dsp/).
Spectral time-variance contributes to the "naturalness" of a sound, while deviations from harmonic spectra and the way these interact when several instruments perform together in unison change the timbre of the resulting sound and contribute to what is perceived as a 'chorus effect'.
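A minimal sketch of computing and plotting a sonogram (Python with scipy and matplotlib assumed; the file name and analysis window settings are placeholders):

```python
# Sketch: sonogram of a recording - time on x, frequency on y, intensity as color.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

sr, x = wavfile.read("flute_tone.wav")           # hypothetical input file
x = x.astype(float)
if x.ndim > 1:
    x = x.mean(axis=1)                           # mix down to mono if needed

f, t, Sxx = spectrogram(x, fs=sr, nperseg=2048, noverlap=1536)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")   # dB scale
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Sonogram")
plt.show()
```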



 Figure 1: Sonograms from (a) warbler, (b) whale, (c) flute, (d) singer singing a steady tone.
From Music without Borders by Susan Milius.
 

 


 

Pierre Schaeffer's Dynamic, Melodic, and Harmonic Planes

In an alternative approach by composer and music theorist Pierre Schaeffer, all necessary aspects of the acoustic correlates to timbre discussed (spectral distribution and signal/spectral time-variance) can be described in terms of only three dimensions/planes: a) dynamic plane, b) melodic plane, and c) harmonic plane.
View the slides on Schaeffer's timbre theory presented in class.
Click here for a printable copy of the slides (4 slides per page - 1 page), also displayed below.

Timbre and Basilar Membrane Disturbance Patterns

Performance techniques, resonant and feedback characteristics of a vibrating system, aural harmonics, other subjective/combination tones (discussed in the context of the ear's nonlinearity and above), and the phenomenon of masking (discussed in the contexts of the ear's nonlinearity and of loudness), all influence the timbre of a sound by changing the sound's effective (i.e. reaching the inner ear) spectral composition and temporal profile as well as its signal envelope shape.

Based on all the above considerations, at one level, timbre appears to have a correlate in the disturbance pattern of the Basilar Membrane and the way it changes with time. Such an approach can account for timbral similarities/differences due to spectral distribution, register, signal and spectral time variance, formants, and combination tones.  Associating BM disturbance patterns to timbre identification and discrimination has parallels in associating it to pitch identification/discrimination or to total loudness.  This is consistent with the specific observation that relative and quasi-absolute pitch judgments are facilitated by timbral cues and with the general observation that all perceptual and physical attributes of sound waves are, at some level, interdependent.

 


 

Cognitive aspects of timbre

 

Multidimensionality of timbre

As already discussed, the multidimensional nature of timbre makes it difficult to both define and quantify. Several studies have attempted to define timbre based only on a tone's steady-state spectral characteristics (e.g. Helmholtz, 1875; Slawson, 1985) or time-variance information (e.g. Balzano, 1986) that, in some cases, includes spectral jitter (i.e. micro-variations in amplitude and frequency spectral envelopes of individual components; e.g. Lo, 1987).

Grey (1975) revealed three primary physical dimensions along which timbral judgments are made, based on timbral similarity experiments:
a) narrow vs. wide spectra;
b) coherent vs. independent spectra; and
c) low vs. high centroid attack-spectra.
Instrument identification experiments support the timbral clustering in Grey's (1975) 3D plot (in Butler, 1992: 132), but reveal asymmetries in identification confusion (e.g. the bassoon is confused for a French horn and the saxophone is confused for an English horn but not the other way around).

More complicated experimental studies (e.g. Grey and Moorer, 1977) that involve melodic and harmonic contexts, suggest that perceptual strategies for timbral recognition and discrimination are varied and depend on:
a) whether spectral or temporal characteristics of a tone are more pronounced (e.g. tones corresponding to continuous versus impulse signals) and
b) attention shifts and musical/sonic context.

Musical context and timbre

A series of studies by Kendall and colleagues (e.g. Kendall, 1986) provide further support to Grey's assertion that the perception of timbre also depends on musical context. Different strategies are being employed depending on the types of tones in question, with time-variant steady-state information and the way it modulates in realistic musical contexts providing the most salient timbral cues. 
As noted by some researchers (e.g. Risset and Wessel, 1982), the evolution of physical parameters of tones during a musical phrase can obscure the importance of parameters that are essential to timbral recognition of isolated tones, pointing towards the significance of musical time.

Review the slides presented in class on timbre studies by Schouten (1968), Grey (1977 - slightly different interpretation of the 1975 results, above), Kendall & Carterette (1991-1994), Huron (2005), and more.
Click here for a printable copy (6 slides per page - 3 pages), also displayed below.

 


 

 

Beating & roughness - consonance / dissonance

Reminders
interval: pitch-height distance between two tones
harmonic interval: interval between two tones played simultaneously
melodic interval: interval between two tones played sequentially

Consonant intervals
[non-discordant, pleasant, "smooth"]
a) Unison: 0 semitones difference between interval notes (maximally consonant)
b) Octave: 12 semitones
c) Perfect fifth: 7 semitones
d) Perfect fourth: 5 semitones
....

Dissonant intervals
[discordant, unpleasant, "rough"]
a) Minor second: 1 semitone difference between interval notes (maximally dissonant)
b) Major second: 2 semitones
c) Augmented fourth: 6 semitones
d) Major seventh: 11 semitones
....

The degree of "smoothness," blending, or "sensory consonance" (see below) of a given harmonic interval has been linked to the degree of basilar membrane disturbance-pattern matching between the interval tones and, consequently, the degree of simplicity in the combined disturbance pattern for both tones.  The above figure includes schematic diagrams of idealized disturbance patterns corresponding to the low-frequency harmonics of tones in octave (A), fifth (B), and third (C) harmonic intervals.

Sensory Consonance: Term referring to the perceptual 'smoothness' of a harmonic interval. The further apart on the basilar membrane the resonance regions for the components of the two notes, the less 'rough' the resulting sound and the more consonant (smooth) the interval.
Sensory Dissonance: Term referring to the perceptual 'roughness' of a harmonic interval. It is the result of the interaction among interval-note components with resonance regions along the basilar membrane that are less than a critical band apart.

The degree of beating and roughness of a given harmonic interval is directly related to the interaction of the frequency components of the interval notes within the ear and is responsible for the interval's degree of sensory consonance/dissonance. The term "sensory consonance" therefore refers to consonance understood specifically as absence of the sensation of auditory roughness.
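One way to make this quantitative is to pair up the partials of the two interval tones and sum a roughness contribution for every pair that falls within roughly a critical band. The sketch below loosely follows Sethares' parameterization of the Plomp & Levelt data; it illustrates the idea only and is not the specific roughness model presented in class:

```python
# Sketch: rough sensory-dissonance estimate for a harmonic interval.
import numpy as np

def pair_roughness(f_low, f_high, a_low, a_high):
    """Roughness contribution of two sine components (f_low <= f_high)."""
    s = 0.24 / (0.021 * f_low + 19.0)      # scales the curve with register (critical band)
    df = f_high - f_low
    return a_low * a_high * (np.exp(-3.5 * s * df) - np.exp(-5.75 * s * df))

def interval_roughness(f0, ratio, n_harmonics=6):
    """Sum pairwise roughness for two harmonic tones a given frequency ratio apart."""
    partials = [(n * f0, 1.0 / n) for n in range(1, n_harmonics + 1)]
    partials += [(n * f0 * ratio, 1.0 / n) for n in range(1, n_harmonics + 1)]
    total = 0.0
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            (fa, aa), (fb, ab) = sorted([partials[i], partials[j]])
            total += pair_roughness(fa, fb, aa, ab)
    return total

# Compare a perfect fifth (3/2) and a minor second (16/15) above 220 Hz:
print(interval_roughness(220.0, 3 / 2))     # relatively low  -> sensory consonance
print(interval_roughness(220.0, 16 / 15))   # relatively high -> sensory dissonance
```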

Read the description of the relationship among beating, roughness, spectral distribution, and critical bands, presented in class.
Listen to a comparison between the roughness and beating sensations. As the frequency difference between the tones in the interval gradually narrows, the roughness sensation gradually gives way to the beating sensation.  
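A minimal sketch of a comparable demonstration: a fixed sine tone mixed with a second tone whose frequency difference narrows over time, so that roughness gradually gives way to beating (all values are arbitrary choices, not taken from the class example):

```python
# Sketch: roughness giving way to beating as the frequency difference narrows.
import numpy as np
from scipy.io import wavfile

SR, DUR = 44100, 10.0
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

fixed = np.sin(2 * np.pi * 440.0 * t)            # steady 440 Hz tone
diff = np.linspace(60.0, 1.0, t.size)            # frequency difference: 60 Hz -> 1 Hz
# integrate the instantaneous frequency (440 + diff) to obtain the glide's phase
phase = 2 * np.pi * np.cumsum(440.0 + diff) / SR
glide = np.sin(phase)

mix = 0.4 * (fixed + glide)
wavfile.write("roughness_to_beating.wav", SR, mix.astype(np.float32))
```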
 

The general/musical concepts of consonance and dissonance depend on many variables, additional to sensory consonance/dissonance.  Within many musical traditions, melodic/harmonic development is often based on (musical) consonance/dissonance contrasts, and musical structures are created through a (sometimes periodic) back and forth move between (musical) consonance and dissonance.
One can think of such moves as outlining a musical piece's consonance/dissonance 'contour'.
The beating and roughness sensations are directly related to the degree of sensory (not musical) consonance/dissonance of harmonic intervals. However, they are not directly related to the sensory or musical consonance/dissonance of melodic intervals.
 

It is important to note that, although the degree of roughness/beating of a sound depends on clear-cut physical/physiological considerations, applicable to all cultures, how "pleasant" or musically consonant a given degree of roughness is judged to be is culturally defined, with no universally "correct" judgment. 

The concepts of consonance and dissonance have been approached in our class specifically and narrowly from within the physical (sound wave properties) and physiological (ear properties) frames of reference. The approach applies to acoustic or sensory consonance/dissonance, determined by the extent of beating/roughness generated from the interaction of the different frequency components within a complex spectrum, and is not addressing important evaluative or general contextual musical issues.

In other words, it does not address the fact that what is considered musically consonant (acceptable, pleasing, correct) or dissonant (unacceptable, disturbing, wrong) depends on melodic, harmonic, rhythmic, and dynamic contexts, and may change:
    a) with time (historical context),
    b) with tradition (cultural context), or even
    c) within a single tradition, style, or piece of music (musical context).
 
As can be argued for all aspects of musical communication, the concepts of musical consonance and dissonance involve types of 'knowing' (explicit/implicit rules, schemata, categories, etc.) that go beyond physics or physiology.

"Whether one combination [of tones] is rougher or smoother than another depends solely on the anatomical structure of the ear, and has nothing to do with psychological motives. But what degree of roughness a hearer is inclined to … as a means of musical expression depends on taste and habit; hence the boundary between consonances and dissonances has frequently changed … and will still further change… " (Helmholtz, 1875.)

Within the Western Art musical tradition there is a strong link between roughness and annoyance, manifested in the assumption that rough sounds are considered inherently bad or unpleasant and are therefore to be avoided.
Instrument construction and performance practices outside the Western art musical tradition, however, indicate that the sensation of roughness can be an important factor in the production of musical sound. Manipulating the roughness parameters helps create a buzzing or rattling sonic canvas that becomes the backdrop for further musical elaboration. It permits the creation of timbral or even rhythmic variations (through changes among roughness degrees), contributing to a musical tradition’s menu of expressive tools.

Watch the Mijwiz and Ganga video examples presented in class.

 


 

Time and pitch, loudness, & timbre

 

As discussed in previous weeks, there is a duration threshold for pitch (~10-60ms, depending on frequency and intensity) below which sounds lose their pitch identity, and a duration threshold for loudness (~200ms for sine signals and ~400ms for broadband signals) below which loudness appears to increase with increase in duration, even if the sound intensity level remains fixed. 

It has also been shown that there is a time separation threshold  below which two signals separated by a time delay sound as one (~2-50ms depending on the spectra of the two signals; e.g. Hirsch, 1959).  In this case, introduction of the second, delayed signal has an effect on the original signal's loudness (the loudness of the original signal appears to increase) and timbre (the original signal's attack portion becomes less sharply defined, resulting in a timbre with a rather 'blurry' onset).

In this example, you will hear 3 tones: (i) a 600Hz complex tone, (ii) two 600Hz complex tones separated by 30ms, and (iii) two 600Hz complex tones separated by 150ms.
The introduction of the second tone in (ii) is perceived as an increase in loudness and a change in the attack of the first tone, while the introduction of the second tone in (iii) results in the perception of two tones.  
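A minimal sketch approximating this example (Python with numpy/scipy; the tone's spectrum, length, and levels are arbitrary choices):

```python
# Sketch: a 600 Hz complex tone mixed with a delayed copy of itself,
# once with a 30 ms delay (tends to fuse) and once with a 150 ms delay
# (heard as two tones).
import numpy as np
from scipy.io import wavfile

SR = 44100

def complex_tone(f0=600.0, dur=0.4, n_harm=5):
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return sum((0.2 / n) * np.sin(2 * np.pi * n * f0 * t) for n in range(1, n_harm + 1))

def delayed_pair(delay_s):
    tone = complex_tone()
    out = np.zeros(len(tone) + int(SR * delay_s))
    out[:len(tone)] += tone                       # original tone
    out[int(SR * delay_s):] += tone               # delayed copy
    return out

pair_30ms = delayed_pair(0.030)     # fused: louder, with a 'blurred' attack
pair_150ms = delayed_pair(0.150)    # perceived as two separate tones
gap = np.zeros(SR)                  # one second of silence between the two cases
wavfile.write("delay_demo.wav", SR,
              np.concatenate([pair_30ms, gap, pair_150ms]).astype(np.float32))
```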

This blending of two tones into one, and the influence of delay on loudness and timbre for delays below the time separation threshold, is related to forward masking (Module 3b). Both duration and time-separation thresholds are linked to the mechanical and electro-chemical latency of the auditory system.

Previous experience and context can often override such psycho-physiological limits, allowing listeners, for example, to make pitch, loudness, and timbre judgments for tones with durations below the suggested thresholds.
Listen to a melody performed using 7 notes shortened to clicks (2 signal cycles per note). Stripped of context, this melody is unrecognizable to most listeners. If listeners are told that this is the opening line of "......(look at the bottom of the page)" they are able to hear the intended pitch contour. After listeners have been primed to listen to this tune, they continue to hear it even if the "notes" represented by each click follow a random pitch pattern.

 


 

Interlude: Spectral music  (optional)

 

"What sparked that initial jolt of attraction to musical composition in me when I was little was not melody or structure, it was something about the sound of music--sound as opposed to what you're doing with the sound, like the crunchiness of a Bartók Quartet. It was timbre." J. Fineberg.

Penderecki is one of the few composers who based entire works on natural timbre manipulation. Electronic timbre manipulation was prominent in the works of Varèse and Stockhausen.
Penderecki's Threnody for 52 strings can be seen as a set of variations upon a harmonic (i.e. simultaneous) tone cluster. The work ends with a 30-second diminuendo from fff to silence, where all 52 strings sustain a single note each at 1/4-tone intervals, producing beating/roughness timbral effects that gradually transform into fading white noise.
Listen to Penderecki's Threnody for the Victims of Hiroshima (1960)

James Tenney's work (e.g. Harmonium #5 (1978)) includes several examples of spectral techniques used explicitly to attract attention to musical timbre.

Spectral music focuses on the physical, perceptual, and aesthetic attributes of timbre. Rather than hearing musical structure as sound, spectral composers such as Tristan Murail, Gerard Grisey, and Jonathan Harvey hear sound structures as music. They claim to organize music in accordance with the ways we naturally perceive sounds, and to produce perceptible results along these lines. The ways in which spectral music has been conceived reflect an ambiguity regarding what it wants to communicate: a metaphorical image of the natural, or a direct, literal representation of sonic properties and acoustic phenomena.

Early spectral composers wanted music to be viewed as "a special instance of the general phenomenon of sound", and as "sound evolving in time" (Fineberg, 2000). More radically than Cage's introduction of "sounds" into music, this insistence that music is sound opens up the possibility for directing attention to the usually overlooked, within the Western musical tradition, timbral dimension of music.

Other Examples

Some of Stockhausen's and Varèse's works follow, in many respects, Cowell's (1930s) idea of devising a complex rhythmic system by superimposing poly-rhythms in the proportions of harmonic spectra, emphasizing timbre over pitch.
Listen to Stockhausen's Gesang der Jünglinge (1955/56).
Listen to Varèse's Poème électronique (1958).

In one 'spectral' technique, composers analyze a portion of a signal they believe contains crucial information about a given tone's sonic meaning, and use the analysis results as the basis for newly-composed pieces. Spectral analysis, a pre-compositional process, should not, however, be confused with the audible, perceivable results.
Listening to spectral music has been likened to participating in a music perception experiment, aimed at discovering whether and how a given acoustical principle (e.g. the relations among spectral components) will function audibly under new conditions (e.g. in musical pieces created based on such relations).
Listen to S. Reich's Come Out (1966)
Listen to S. Reich's Different Trains (1988): America - Before the war

Grisey's Partiels (1975) has been described as an exploration of the sound of the trombone, where the results of a computer spectral analysis of a trombone tone are orchestrated for different instruments. Perceptually, there is nothing from the trombone preserved in this composition. In addition, the spectra from a variety of other instrumental tones could have served the same purpose. Setting aside the fact that listeners seem intuitively interested in the idea that they are hearing "the sound of a trombone", the original trombone is employed simply as a source of inspiration for compositional material, and the guiding metaphor of the piece seems to be the aesthetic/sensual appreciation of sound quality.
Click on the titles below to listen to two other compositions by Grisey.
In Vortex temporum (1996), a few piano keys are re-tuned to produce the frequencies common to all the spectra employed in the piece.
Taléa (1986) is a little freer, exemplifying Grisey's shift of focus from intellectualized concerns of form to perceptual concerns of timbre.
_Vortex temporum
      1. I
      2. Interludes I & II
      3. III & Interlude 3
_Taléa

Murail's L'esprit des Dunes re-presents sampled sounds as more or less transformed images of themselves, manipulating degree of resemblance to their original forms. When processing his samples, Murail preserved the time variant aspects of the spectra in an effort to keep "some of the internal life of the sound" (Smith, 2000).

Several popular avant-garde composers have experimented with timbre-centered music, producing interesting results.
Listen to Eno's Distant Hill
 

For extensive research on Spectral Music see issues 19(2) and 19(3) from 2000 and 22(1/2) from 2003 of the journal Contemporary Music Review, available online through the Columbia College Library (IIMP database - you will need your Oasis ID and password for off-campus access to the materials).

 


 

Key to the "signals with no attack" listening example:  The three instruments are piano, clarinet, and French horn, in this order.
Key to the "clicks" melody: "Mary had a little lamb"


  

Columbia College, Chicago - Audio Arts & Acoustics