Psychoacoustics, Module 2. Readings: Plack, 2005, Chapters 2 & 3. Lecture notes.

Topics

Physical attributes of acoustic waves - Part I: Introduction
Physical attributes of acoustic waves - Part II
    1. Logarithmic scales - Root-mean-square Pressure
    2. Sound Waves - Inverse square law
    3. Linear Superposition & Interference
    4. Fourier analysis - Spectra - Noise spectrum level - Amplitude and Frequency modulation
    5. Resonance - Reflection/Reverberation - Diffraction - Refraction - Absorption - The Doppler effect
Musical instruments as sound-wave generators and transmitters
Signal Processing & Digital Signals

Vibration:  A back-and-forth motion around a point of rest or, more generally, a variation of any physical or other property of a system around a reference value of interest.  In general, observations on the vibrations of a source are (often erroneously) expected to apply unchanged to the resulting waves, and observations on waves are expected to have direct origins in source vibrations.  These expectations were first questioned in Rayleigh's work (late 1800s).

Simple Harmonic Motion:  The simplest type of vibration, similar to a free pendular motion.  In a simple harmonic motion, the restoring force (the force pulling the vibrating mass towards its point of rest) is proportional to the displacement (distance moved away) from the resting point.
It can be described graphically by a sine curve y(t) = Asin(2πft+φ) (see Figures 1 and 2, below).
See an example of the sine and cosine functions for an angle θ and a graphic definition of both functions.
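As a minimal numeric sketch of the sine formula above (the function name is ours, not from the readings):

```python
import math

def shm_displacement(A, f, phi, t):
    """y(t) = A*sin(2*pi*f*t + phi): displacement of a simple harmonic motion."""
    return A * math.sin(2 * math.pi * f * t + phi)

# With zero starting phase the motion starts at the point of rest (y = 0)
# and returns there after one full period T = 1/f.
y_start = shm_displacement(1.0, 440.0, 0.0, 0.0)
y_one_period = shm_displacement(1.0, 440.0, 0.0, 1.0 / 440.0)
```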

Complex vibration/motion:  A vibration in which the restoring force is related to displacement in a complex way.  Complex motions are described graphically by complex curves. All such motions can be seen as the result of the combination of multiple simple harmonic motions.  Therefore, the curves (signals) that describe them are essentially the sum of an appropriate number of sinusoidal curves.

(Acoustical) Signal:  A two-dimensional graphic representation of a vibration/wave, plotting displacement (distance from the point of rest), or a number of other variables such as velocity, pressure, etc. (y axis), over time (x axis).  A signal therefore shows how some variable changes with time.  Signals of waves are also referred to as waveforms. Sinusoidal and complex signals represent sinusoidal and complex vibrations/waves respectively (see Figures 1-3, below).

The envelope of a two-dimensional complex signal is a boundary curve that traces the signal's amplitude through time.  It encloses the area outlined by all maxima of the motion represented by the two-dimensional signal and includes points that do not belong to the signal (see to the right: signal in blue, envelope in red).

Periodic vibration/wave: A vibration/wave that repeats itself at regular time intervals.
Periodicity and regularity are attributes with specific perceptual significance.  In general, regularity encourages prediction.  In terms of sound, periodic waves give rise to well-defined pitch sensations.

Frequency (f): Number of repetitions per unit time: number of cycles per second (frequency is represented on the x axis of a spectrum - see the section on spectra for details). It is measured in Hertz (Hz): 1 Hz = 1 cycle per second. Frequency is the reciprocal of the period T: f = 1/T.
This animation illustrates the relationship between rotation at a fixed angular velocity (i.e. fixed angles on a cycle per unit time) and simple harmonic (i.e. sinusoidal) motion.

Period (T): The time it takes to complete a single full vibration/wave or cycle (period is represented on the x axis of a signal). It is measured in seconds per cycle (T = 1/f).
Note that time (t) and period (T) do not describe the same thing.

Amplitude (A): Maximum displacement (velocity, pressure, etc.) from the point of rest (amplitude is represented on the y axis of a spectrum and as a single point on the y (e.g. displacement) axis of a signal).

Intensity (I):
A measure of the amount of energy in a vibrational system. It is proportional to A² and f² and is measured in W/m². For waves in gases and liquids, Intensity is also proportional to P².

Phase: Position on the vibration cycle at a given time. The phase of a wave is different for adjacent points in space because the vibration energy propagating in the form of a wave reaches these points at different times. See an example.

Logarithmic scale for Power, Intensity, and Pressure - Prms

Although the absolute units for Power, Intensity, and Pressure are watts, W/m², and Pascals (N/m²) respectively [all linear measures], these quantities are usually measured on a 10-base logarithmic (log) scale.

Short review of log math operations [see this Salford University page as well as Johnston, 1989 (Appendix 2) for more math examples]

Definition: The x-base log of a number y is the power to which we must raise x in order to get y.
So, when we say that the x-base log of a number y is equal to z we mean that, in order to get y we must raise x to the power of z:
logx(y) = z  <=>  y = x^z .  [by definition, logx(1) = 0]
For example,  log10(1000) = 3  because 1000 = 10^3.

Addition in log math is equivalent to multiplication in linear math. For example:
log10(1000) + log10(100) = 3 + 2 = 5 = log10(100,000) = log10(1000 × 100)

Similarly, subtraction in log math is equivalent to division in linear math. For example:
log10(1000) - log10(100) = 3 - 2 = 1 = log10(10) = log10(1000 / 100)
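Both identities are easy to verify numerically; a quick sketch using Python's math module:

```python
import math

# Addition of logs equals the log of the product.
sum_of_logs = math.log10(1000) + math.log10(100)   # 3 + 2 = 5
log_of_product = math.log10(1000 * 100)            # log10(100,000) = 5

# Subtraction of logs equals the log of the quotient.
diff_of_logs = math.log10(1000) - math.log10(100)  # 3 - 2 = 1
log_of_quotient = math.log10(1000 / 100)           # log10(10) = 1
```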

There are two reasons why we use log rather than linear scales to measure sound Power, Intensity, and Pressure:
a)  As stated in Fechner's Psychophysical Law, perception relates (approximately) logarithmically to physical stimuli, with multiplication in the value of a physical stimulus corresponding to addition in the resulting sensation.  Therefore, log math provides a better model of the relationship between physics (waves) and sensation (sound) than linear math.
b) Log math permits the compression of very large scales into smaller, more easily manageable ranges. For example, the linear range between the lowest (barely perceptible at 1,000 Hz: 10⁻¹² W/m²) and highest (harmful to the ear: 1 W/m²) Intensities we can hear spans a ratio of 1 / 10⁻¹² = 1,000,000,000,000.
In contrast, the same range in log math is |log10(10⁻¹²) - log10(1)| = |-12 - 0| = 12 bels (after A. G. Bell) or 120 decibels (dB) (deci-: Latin for one tenth).

Since we are ultimately interested in the sensation of sound rather than simply in physical waves, we define sound Power, Intensity, and Pressure levels in dB as:
SWL = 10·log10(W/Wref),  SIL = 10·log10(I/Iref),  SPL = 20·log10(P/Pref),  respectively,
where Wref = 10⁻¹² W,  Iref = 10⁻¹² W/m²,  and  Pref = 2×10⁻⁵ Pa (N/m²) are the smallest perceivable values of Power, Intensity, and Pressure (i.e. the 0 dB sound level)

W = 1 W,  I = 1 W/m²,  and  P = 20 Pa (N/m²) are the largest Power, Intensity, and Pressure values that are safe to listen to (i.e. the 120 dB sound level).
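A small sketch of these level definitions (function names are ours, not from the text):

```python
import math

W_REF = 1e-12   # W         : threshold of hearing, Power
I_REF = 1e-12   # W/m^2     : threshold of hearing, Intensity
P_REF = 2e-5    # Pa (N/m^2): threshold of hearing, Pressure

def swl(power):
    """Sound Power Level in dB: 10*log10(W / Wref)."""
    return 10 * math.log10(power / W_REF)

def sil(intensity):
    """Sound Intensity Level in dB: 10*log10(I / Iref)."""
    return 10 * math.log10(intensity / I_REF)

def spl(pressure):
    """Sound Pressure Level in dB: 20*log10(P / Pref)."""
    return 20 * math.log10(pressure / P_REF)

# The reference values themselves sit at 0 dB; the "largest safe" values
# quoted above (1 W, 1 W/m^2, 20 Pa) all come out at 120 dB.
```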

Sound Levels and Human Response (from the Noise Pollution Clearinghouse, http://www.nonoise.org)

Common sounds                                      | Noise Level [dB] | Effect
Rocket launching pad (no ear protection)           | 180              | Irreversible hearing loss
Carrier deck jet operation; Air raid siren         | 140              | Painfully loud
Thunderclap                                        | 130              |
Jet takeoff (200 ft); Auto horn (3 ft)             | 120              | Maximum vocal effort
Pile driver; Rock concert                          | 110              | Extremely loud
Garbage truck; Firecrackers                        | 100              | Very loud
Heavy truck (50 ft); City traffic                  | 90               | Very annoying; Hearing damage (8 hrs)
Alarm clock (2 ft); Hair dryer                     | 80               | Annoying
Noisy restaurant; Freeway traffic; Business office | 70               | Telephone use difficult
Air conditioning unit; Conversational speech       | 60               | Intrusive
Light auto traffic (100 ft)                        | 50               | Quiet
Living room; Bedroom; Quiet office                 | 40               |
Library; Soft whisper (15 ft)                      | 30               | Very quiet
Broadcasting studio                                | 20               |
                                                   | 10               | Just audible
                                                   | 0                | Hearing begins

For music-related sound levels see Johnston, 1989 (Appendix 4).  See also an additional table of commonly encountered sound levels.

Root-mean-square Pressure
The amount of energy in a signal can be measured in terms of
a) 0-to-peak amplitude,
b) peak-to-peak amplitude, and
c) root-mean-square (rms) amplitude (See Figure a, below. For sinusoidal signals, rmsAmplitude = 0.707*peakAmplitude).

Root mean square Pressure or Prms is a measure of the area outlined by the signal (Figure b) and is defined as the square root of the average of the square of the pressure of the sound signal over a given duration (usually one period): Prms = [(P²)average]^0.5
Prms is a more meaningful measure of a signal's overall loudness than 0-to-peak and peak-to-peak pressures, as it is directly related to a signal's Intensity (remember, Intensity is proportional to P²), used to describe a signal's energy content.
Figure (a): 0-to-peak, peak-to-peak, and rms amplitudes of a sine signal.
Figure (b): rms amplitude is a measure of the area outlined by the signal (highlighted).
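For a sampled signal, the rms computation is a one-liner; the sketch below also checks the 0.707 relationship for a sine (the sampling choices are ours):

```python
import math

def rms(samples):
    """Root-mean-square: square, average, then take the square root."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One full period of a unit-amplitude (0-to-peak = 1) sine, sampled 1000 times.
N = 1000
sine = [math.sin(2 * math.pi * n / N) for n in range(N)]
sine_rms = rms(sine)   # expected ~0.707 = 1/sqrt(2) times the peak amplitude
```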

Waves

Wave: Transfer of vibration energy across a medium (e.g. air, string, etc.).
Waves originate in but are not equivalent to vibrations.  Waves depend on the propagation-medium properties while vibrations do not.
Sound waves in air are manifested as alternating air-condensations and rarefactions that spread away from the vibrating source in the form of pressure fluctuations.
[Defining waves is a complex task. See this extended definition]

Wavelength (λ):
The distance a wave travels during one vibration cycle.  It is a function of frequency f and the speed of sound c:
λ= c/f
(equivalent to  λ= c*T).

Speed of sound: c = (E/ρ)^0.5 (where E is Young's modulus in N/m² and ρ is density in kg/m³).  So, the stiffer a medium (higher E) the faster the speed of sound, while the denser the medium (higher ρ) the slower the speed of sound.
Speed of sound in air: c = 345 m/s or 1130 ft/s at 21°C or 70°F  [C: temperature in Celsius - F: temperature in Fahrenheit]

Unit conversions:
1 foot = 0.3048 meters
1 meter = 3.281 feet
C = (5/9)*(F-32)
F = (9/5)*C+32

The speed of sound in air is related to air temperature (in Celsius) as follows:
c = 345 + (C - 21)·0.6 m/s
That is, the speed of sound increases by 0.6 m/s for each degree of temperature increase (in Celsius).
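The temperature rule above and the wavelength relation λ = c/f combine into a short sketch (a linear approximation, as in the notes; function names are ours):

```python
def speed_of_sound(celsius):
    """Approximate speed of sound in air, m/s: 345 + (C - 21) * 0.6."""
    return 345.0 + (celsius - 21.0) * 0.6

def wavelength(f, celsius=21.0):
    """Wavelength in meters: lambda = c / f."""
    return speed_of_sound(celsius) / f

# At 21 C, an A4 (440 Hz) wave is roughly 0.78 m long.
lam_a4 = wavelength(440.0)
```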

Speed of sound on a string with tension T and inertia μ (inertia: mass per unit length): c = (T/μ)^0.5.

Simple/sine Wave: Transfer of a simple harmonic motion across a medium.

Complex Wave: Transfer of complex vibration/motion across a medium.

Transverse waves: waves propagating perpendicularly to the motion of the vibration (e.g. waves on a string, waves on the front and back plates of a violin, waves on a drumhead, etc.).

 Longitudinal waves: waves propagating parallel to the motion of the vibration (e.g. sound waves in air, waves in air columns (e.g. trombone), etc.). (From D. Russell's website at Kettering University, Flint, MI) See these examples of transverse and longitudinal waves
Here are two additional examples (Quicktime movies) of plane and spherical (demo in 2 dimensions) longitudinal waves.

Standing waves: vibration patterns (on strings, air columns, listening environments, etc.) arising when energy propagating in a wave is trapped within two or more, appropriately positioned, reflective boundaries.  Standing waves are characterized by fixed points along the wave propagation medium where the amplitude is maximum (antinodes) and zero (nodes). They are desirable in musical instruments and undesirable in listening environments.
Transverse standing waves
Longitudinal standing waves

Inverse Square Law

Assuming that the energy from a vibrating source spreads away from the source in a spherical manner (this is an idealized case, applicable only to low frequencies - see diffraction, below), the Intensity of the resulting wave at a given point in space is inversely proportional to the square of the distance from the source. Since I = W/A and, for a sphere, A = 4πr², the intensity ratio at distances r1 and r2 from the center of the sphere will be I1 / I2 = (r2 / r1)² [can you derive this from the equation I = W/A?].  For example, doubling the distance from a source results in Intensity reduction by a factor of 4. Finally, since Intensity is proportional to the square of the Pressure, P1 / P2 = r2 / r1.

Linear Superposition & Interference

The linear superposition principle states that, at any given time, the total displacement of two superimposed vibrations (waves) is equal to the algebraic (i.e. taking into consideration signs: +, -) sum of the displacements of the original vibrations (waves).  First introduced by Bernoulli (late 1700s), this principle came out of the study of strings, considered as one-dimensional systems.  It states that many sinusoidal vibrations can co-exist on a string independently of one another and that the total effect at any point is the algebraic sum of the individual motions.  To be more precise, however, if the compound vibration at any point of a string is the algebraic sum of the individual vibrations, it is the sinusoidal waves (and not the vibrations) that can co-exist on a string independently of one another.
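The algebraic (sign-respecting) summation described by the superposition principle can be sketched directly on sampled signals (the sampling choices are ours); summing a wave with its phase-inverted copy also previews destructive interference, discussed next:

```python
import math

def superpose(sig_a, sig_b):
    """Algebraic (sign-respecting) sum of two sampled vibrations."""
    return [a + b for a, b in zip(sig_a, sig_b)]

N = 8
t = [n / N for n in range(N)]                    # one period, 8 samples
wave = [math.sin(2 * math.pi * x) for x in t]
inverted = [-y for y in wave]                    # same wave, opposite phase

# A wave added to its phase-inverted copy cancels completely.
silence = superpose(wave, inverted)
```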

The interference principle (Figures 5-7) is an extension of the superposition principle.  It states that, when two or more vibrations (waves) interact, their combined amplitude may be larger (constructive interference) or smaller (destructive interference) than the amplitude of the individual vibrations (waves), depending on their phase relationship.
When combining two or more vibrations (waves) with different frequencies, their periodically changing phase relationship results in periodic alterations between constructive and destructive interference, giving rise to the phenomenon of
Amplitude fluctuation: amplitude that fluctuates periodically over time at a rate equal to the frequency difference of the original vibrations (waves).
If two sines with frequencies f1 and f2 are added together, the rate of amplitude fluctuation will be equal to |f1-f2|.
Click here for an interactive example of interference and amplitude fluctuation (beats)
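The fluctuation rate |f1 - f2| follows from the trigonometric identity sin(a) + sin(b) = 2·cos((a-b)/2)·sin((a+b)/2): the cosine term is the slowly varying envelope, and its two maxima per cycle give |f1 - f2| amplitude peaks per second. A minimal sketch:

```python
def beat_rate(f1, f2):
    """Amplitude-fluctuation (beat) rate, in Hz, of two added sines."""
    return abs(f1 - f2)

# Two tones at 440 Hz and 444 Hz beat 4 times per second.
```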

As an extreme example of destructive interference, if we combine two vibrations (waves) with the same frequency and amplitude but opposite phase, the two vibrations (waves) will cancel one another, resulting in no vibration (wave) at all. Complete (or almost complete) destructive interference is often used in noise and feedback reduction systems. According to a short story in the December 10, 2004 issue of the New York Times, two 2005 Honda car models were the first to carry active noise reduction systems based on this principle.

Figure 5: Sines A and B have the same frequency, amplitude, and phase values. Their addition results in total constructive interference: the combined signal has amplitude equal to the sum of the initial amplitudes.

Figure 6: Sines A and B have the same frequency and amplitude values but opposite phases. Their addition results in total destructive interference: the combined signal has amplitude 0, which is again the (sign-respecting) sum of the two amplitudes. Since the sines are out of phase, their displacements at every moment in time have equal values but opposite signs.

Figure 7: Sines A and B have the same amplitude and starting phase but different frequencies. Their phase relationship constantly and gradually shifts with time: at one moment they are in phase and at another moment they are out of phase. Their addition results in a constant, gradual shift between constructive and destructive interference, giving a signal (A+B) whose amplitude changes with time.

Depending on the rate of amplitude fluctuation, such fluctuations can be perceived as beating, roughness, or combination tones (to be described later). Dichotic beats, a beating-like sensation arising when two tones with slightly different frequencies are presented one in each ear through headphones, have different, non-interference-related origins, to be discussed during our examination of sound-source localization strategies later in the semester.

J. Fourier's (1768-1830) mathematical law states that all complex signals can be reduced to the sum of a set of sine signals with appropriate frequencies and amplitudes, referred to as the Fourier/sine components or partials of a complex signal and making up the complex signal's spectrum.  Analysis of a complex signal into its sine components is called Fourier analysis, while the reverse process, constructing a complex signal out of a set of sinusoids, is called Fourier synthesis.

For periodic signals (also called harmonic signals, such as the signals corresponding to most musical sounds), the lowest-frequency component is called the fundamental, and all other components (also called harmonics) have frequencies that are integer multiples of the frequency of the fundamental.  That is, if the fundamental component has frequency f, then the components above the fundamental have frequencies 2f, 3f, 4f, and so forth.  All other types of signals are non-periodic and are called inharmonic.

Periodic (harmonic) complex signals have a rather definite pitch that matches in frequency the frequency of the fundamental (whether or not the fundamental component is actually present in the signal's spectrum).  Non-periodic (inharmonic) signals have a rather indefinite pitch or no pitch at all, depending on the degree of inharmonicity (i.e. on how far away their frequency components are from an integer-multiple relationship) and on the absolute and relative duration of their spectral components.  More during the discussions on pitch, loudness, and timbre in the following weeks.

Experiment with this Fourier synthesis java applet.
You may also download the Fourier-based Synthesizer shown in class (Custom application, Windows only. To install, download the SynthSetup.zip zipped file, un-package it, and click on the SETUP.EXE file).

Spectrum: A two-dimensional graphic representation indicating the frequency (x axis) and amplitude (y axis) values of a signal's sinusoidal components.

The signal in Figure 1, with its regular, sinusoidal peaks and valleys, represents a simple/sine wave, corresponding in frequency to the note A4. The peaks of the signal appear at regular intervals of 2.27 milliseconds, or 1/440th of a second. So the Period of the signal is 1/440th of a second. This means that the signal (and the wave it represents) repeats itself 440 times per second. It has a Frequency (rate of repetition) of 440 Hertz (cycles per second). The Amplitude of the signal is represented as the distance between the top (or bottom) peak and the central horizontal line (representing the point of rest). Figure 1: Signal of the first 2 periods of a vibration with f: 440Hz and A: 1

The signal in Figure 2, below, represents a pure wave, corresponding in frequency to the note A3. This signal repeats half as fast as the previous one, with a frequency of 220 Hz (cycles per second).  Although the vibration it represents would travel through air to our ear about as quickly as the one represented in Figure 1, it would cause the air to vibrate at a rate half as fast.

Figure 2: Signal of the first period of a vibration with f: 220Hz and A: 1, perceived as a pure tone with pitch an octave lower than the tone represented in Figure 1 (more on tone and pitch sensations in the following weeks).

Figure 3: Example of synthesis of a complex signal out of two sine components with frequencies f1 = 250Hz and f2 = 500Hz. The pitch of the complex signal matches in frequency the fundamental component (250Hz). The last graph is the spectrum of the complex signal.

Figure 4: (a): a musical tone with pitch A2 as it is represented in notation. (b): The frequencies and (approximate) notational representation of the first six components of the tone in (a) (after Campbell and Greated, 1987).
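The synthesis in Figure 3 can be reproduced numerically: summing a 250 Hz and a 500 Hz sine gives a complex periodic signal that repeats with the period of the 250 Hz fundamental. A sketch (sample times chosen arbitrarily):

```python
import math

def complex_signal(t):
    """Sum of two sine components, f1 = 250 Hz and f2 = 500 Hz, as in Figure 3."""
    return math.sin(2 * math.pi * 250 * t) + math.sin(2 * math.pi * 500 * t)

# The sum repeats with the period of the fundamental, T = 1/250 s:
# the value at any time t recurs at t + T.
v0 = complex_signal(0.0012)
v_one_period_later = complex_signal(0.0012 + 1 / 250)
```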

Ideal signal-forms (also called 'waveforms' - see the figure, below)

(a) An ideal sawtooth signal is formed by summing an infinite number of harmonic components, with amplitudes = A/n (A: amplitude of the first component;  n: component number) and with the even-numbered components shifted 180° in phase with respect to the odd-numbered components.

(b) An ideal triangular signal is formed by summing an infinite number of only odd-numbered harmonic components, with amplitudes = A/n² (A: amplitude of the first component;  n: component number) and with successive odd components shifted 180° in phase with respect to one another.

(c) An ideal square signal is formed by summing an infinite number of only odd-numbered harmonic components, in phase with one another and with amplitudes A/n (A: amplitude of the first component;  n: component number).

Fourier analysis drawbacks

Spectra arising from Fourier analysis are, unfortunately, never as clean and precise as the ones in the above figures.
There is a time-frequency trade-off in that,
a) the more precisely we want to know how a spectrum changes with time, the less precisely the frequency values per spectrum will be represented and
b) the more precisely we want to know the frequencies in a spectrum, the more these frequencies will be averaged over time and the less precisely the changes of a sound over time will be captured.
This trade-off is a direct consequence of Heisenberg's uncertainty principle, stating that the more precisely we determine the position of a particle the less precisely we may know its momentum (mass*velocity).  In terms of the frequency distribution indicated through spectral analysis, the time-frequency trade-off means that the shorter the signal portions analyzed the larger the analysis frequency bandwidth and the longer the signal portions analyzed the smaller the analysis frequency bandwidth.

In addition, a series of assumptions accompanies Fourier analysis of signals [e.g. a) the signal analyzed is of infinite duration and b) all the energy within an analysis band lies at the band's high-frequency end]. The frequent violation of these assumptions results in spectral "smearing" (i.e. "true" spectral components being surrounded by additional ones that are not present in the signal analyzed), representing analysis artifacts rather than spectral aspects of the analyzed signal (see below).

Actual spectrum of a signal with two components at the indicated frequencies and amplitudes.
Spectrum of a signal with two components (as indicated to the left) resulting from Fourier analysis of 17ms-long portions of the signal. The temporal resolution of the analysis is 0.017 s (17 ms) and the frequency bandwidth is 1 / 0.017 ≈ 59 Hz.
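The reciprocal relationship between analysis-window duration and frequency bandwidth used in the 17 ms example above can be sketched as:

```python
def analysis_bandwidth(window_seconds):
    """Frequency resolution (Hz) ~ 1 / analysis-window duration (s)."""
    return 1.0 / window_seconds

# A 17 ms window gives ~59 Hz bands; a 100 ms window narrows them to 10 Hz,
# at the cost of blurring how the spectrum changes over time.
```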

Noise spectrum level
Noise spectrum level is defined as the Intensity (in dB) per 1Hz noise bandwidth. See your textbook (Plack, 2005: 26-28).
More on sound levels in the following weeks.

Amplitude and Frequency modulation
See your textbook (Plack, 2005: 28-31).

Signal and spectral representations of the amplitude modulation parameters (modulation frequency and depth). Note the systematic spectral implications of the amplitude modulation frequency and depth and the nonlinear relationship between amplitude modulation depth and degree of signal amplitude fluctuation.

More specifically, sinusoidal amplitude modulation (modulation rate fmod in Hz and modulation depth m in %) of a sine (frequency f and amplitude A) means that the resulting spectrum will have three components: the original sine and two sidebands, a low-frequency sideband and a high-frequency sideband.  The frequency of the sidebands is determined by the modulation rate: flow = f - fmod  &  fhigh = f + fmod. The amplitude of the sidebands is determined by the modulation depth: Alow = Ahigh = (1/2)·m·A.
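The sideband rules above can be sketched as a small function (the function name and example values are ours; m is given as a fraction rather than a percentage):

```python
def am_spectrum(f, A, f_mod, m):
    """Spectrum of a sinusoidally amplitude-modulated sine:
    the carrier plus two sidebands at f +/- f_mod, each with amplitude (1/2)*m*A.
    m is the modulation depth as a fraction (0-1)."""
    side_amp = 0.5 * m * A
    return [(f - f_mod, side_amp), (f, A), (f + f_mod, side_amp)]

# A 1000 Hz sine, amplitude 1, modulated at 4 Hz with 50% depth.
components = am_spectrum(f=1000.0, A=1.0, f_mod=4.0, m=0.5)
```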

Resonance

Phenomenon occurring when the vibration frequency of one system matches the natural frequency of a second system ('natural' meaning the frequency with which a system would vibrate if energy was supplied to it and then it was left on its own; natural frequencies depend on the size, shape, material, and construction of resonators). When resonance occurs, maximum amount of energy is transferred from the first system to the second. Most musical instruments incorporate resonators with shapes, materials, and construction that result in a range of 'natural' frequencies so that they may respond to multiple frequency components of a single note and to more than one note (musical resonators are, in general, broadly tuned).
See the resonance video presented in class.

Helmholtz resonators

Helmholtz resonators, named after their inventor, are spherical or cylindrical containers with a short, narrow neck, open at one end.  The air in the container acts like a spring and the air in the neck acts like a mass, with the volume of air in and near the open hole vibrating because of the 'springiness' of the air inside.  Consider a 'lump' of air at the neck of the bottle (shaded in the middle diagrams).  An air jet can force this lump of air a little way down the neck, compressing the air inside.  This pressure drives the 'lump' of air out but, when it gets to its original position, its momentum moves it a small distance outside the neck.  This rarefies the air inside the body, which then sucks the 'lump' of air back in.  It can thus vibrate like a mass on a spring (diagram at right).  A jet of air from your lips, for example, is capable of deflecting alternately into the bottle and outside, and that provides the power to keep the oscillation going.
The frequency of the vibration is related to the speed of sound in air c, the length l and area A of the container neck, and the volume V of the container (see the figure above).  Longer necks (larger l) and larger containers (larger V) produce lower frequencies, while wider necks (larger A) produce higher frequencies.
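A sketch of the standard Helmholtz-resonator formula, f = (c/2π)·sqrt(A/(V·l)), which matches the dependencies described above (we assume this standard form, since the equation itself appears only in the figure; example dimensions are arbitrary):

```python
import math

def helmholtz_frequency(c, A, l, V):
    """Resonant frequency: f = (c / (2*pi)) * sqrt(A / (V * l))."""
    return c / (2 * math.pi) * math.sqrt(A / (V * l))

# Baseline: c = 345 m/s, neck area 1 cm^2, neck length 5 cm, volume 1 liter.
f_base = helmholtz_frequency(345.0, 1e-4, 0.05, 1e-3)
```

Doubling the neck length or the container volume lowers the frequency; doubling the neck area raises it, as the text states.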

An example of the application of Helmholtz resonance is the inclusion of a bass reflex enclosure to a loudspeaker.  The tuned port or bass reflex enclosure improves the bass (low) frequency efficiency and range of a loudspeaker by carefully adjusting the shape and position of a hole or tube connecting the inside of the speaker box with the outside.  The air volume of the box thus acts as the air in the body of a Helmholtz resonator, with a resonant frequency that is determined by the geometry of the hole/tube and enclosure, deliberately chosen to smoothly extend the frequency range of the speaker system below its original low cutoff frequency.
In addition, the existence of the port greatly reduces the air pressure variation between the inside and the outside of the speaker box.

When a wave reaches a boundary separating two wave propagation media (1 and 2), some energy will be reflected (bounce back), some will be diffracted (bend around), and some will pass through (transmitted) from medium 1 to medium 2.
The energy that will pass through may be refracted (bend) inside medium 2, absorbed by medium 2, and/or simply transmitted into medium 2.

Reflection

We borrow the basic law of reflection from optics: the angle of incidence equals the angle of reflection (see the figure on refraction, further below).  The law of reflection can be derived from Fermat's principle (waves follow the path of least time). The balance between reflected and transmitted energy at a boundary is determined by the impedance (resistance) relationship on either side of the boundary (the larger the impedance mismatch, the more the reflection) and also depends on the angle of incidence.  The larger the absolute value of the incidence angle (i.e. the angle between the direction of the wave motion and the perpendicular to the boundary the wave encounters - this angle ranges from -90° to +90°), the more likely it is to have total reflection.
Practically all musical instruments rely on reflection to build up standing waves and produce sound.  They achieve this by involving structures with appropriately placed boundaries and impedance mismatches (see Johnston, 1989: Appendix 5).

Briefly, if
I0: incident Intensity,  Ir: reflected Intensity,  It: transmitted Intensity,
z1: impedance of the medium in which the sound wave originates,
z2: impedance of the medium into which the sound wave goes,
then:
Ir / I0 = [((z1/z2) - 1) / ((z1/z2) + 1)]²  =  [(z2 - z1) / (z2 + z1)]²  and
It / I0 = 4·z1·z2 / (z2 + z1)²

So
If z1 = z2 then there is no reflection (there is full transmission).
If z2 >> z1 or z2 << z1, then there is almost total reflection (almost zero transmission).
Can you derive the two previous statements from the above equations?

If z2 < z1, then the reflected wave is in phase with the incident wave.  If z2 > z1, then the reflected wave is 180° out of phase with the incident wave.
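The two limiting cases are easy to confirm from the equations above; a sketch (the example impedance values are arbitrary):

```python
def reflection_transmission(z1, z2):
    """Intensity reflection and transmission coefficients at a boundary:
    Ir/I0 = ((z2 - z1) / (z2 + z1))**2,  It/I0 = 4*z1*z2 / (z2 + z1)**2."""
    r = ((z2 - z1) / (z2 + z1)) ** 2
    t = 4 * z1 * z2 / (z2 + z1) ** 2
    return r, t

# Matched impedances: full transmission.  Large mismatch: near-total reflection.
r_match, t_match = reflection_transmission(400.0, 400.0)
r_mismatch, t_mismatch = reflection_transmission(400.0, 1.5e6)
```

Note that r + t = 1 in every case: all the incident energy is either reflected or transmitted.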

Examples of reflection - dependence on impedance (from the University of Saskatchewan, Department of Engineering Physics):
- Reflection and transmission of an incident pulse wave of unit amplitude, with Z1/Z2 = 0.5 (i.e. the second medium is heavier). The reflected wave is negative and its peak is -1/3. The transmitted wave, which propagates more slowly, has an amplitude of 1 - 1/3 = +2/3.
- Total reflection at a fixed end (Z2 = infinity).
- Z1/Z2 = 2 (i.e. the second medium is lighter). In this case, there is no sign reversal in the reflected wave. The transmitted wave has larger amplitude than the incident wave, but this does not mean amplification of wave energy.
- Total reflection at a free end (Z2 = 0).

Successive, partial reflections of sound energy from the surfaces in an enclosure, such as an auditorium, are perceived as reverberation.  Reverberation is a desirable property of auditoriums to the extent that it helps overcome the inverse square law drop-off of sound intensity in the enclosure.
Reverberation time is defined as the time it takes for the reflected sound to lose 60 dB of its original level.  More on reverberation levels in the following weeks.

Diffraction

Diffraction describes the ability of sound waves to bend around obstacles and through openings whose smallest dimension is smaller than the wavelength(s) of the sound waves.  Diffraction is very important in that it largely determines the radiation patterns of sound sources (i.e. the ways the sound spreads out).  In general, low frequencies diffract more than high frequencies, as it is more likely for obstacles/openings commonly encountered in everyday life to be smaller than the wavelength (λ) of low, rather than high frequencies.
For example, assuming c = 345 m/s (at ~20°C):
a) f = 66 Hz (e.g. fundamental frequency of ~ a C2 note played on a bass guitar) => T = 1/66 = 0.0152 s => λ = c·T = 345 × 0.0152 = 5.24 m.
b) f = 528 Hz (e.g. fundamental frequency of ~ a C5 note played on a flute) => T = 1/528 = 0.0019 s => λ = c·T = 345 × 0.0019 = 0.66 m.
As most sounds have spectra with more than one frequency component, portions of the sound's spectrum will be diffracted and portions will not, resulting in a perceived sound quality that depends on the position of a listener relative to the "sound-source - obstacle/opening" system in question.  More during the discussion on timbre.

Illustrations of diffraction - dependence on frequency (from Hyperphysics, Georgia State University).

Refraction

Refraction describes a change in the propagation direction of sound waves as they move across two media where sound waves travel at different longitudinal velocities.  According to Snell's law, sinθ1 / sinθ2 = v1 / v2 (see figure to the right - in this example, v1 > v2). Like the law of reflection, Snell's law can be derived from Fermat's principle. Since the speed of sound in air depends on temperature, sound waves may be refracted in open air as they cross layers of air that are at different temperatures.  Wind direction and speed also influence the propagation velocity of sound waves and, consequently, the refraction angle.  Therefore, the orientation of an open-air theater stage relative to prevailing winds in the area, as well as air-temperature considerations, are crucial to efficient sound transmission to the audience and to minimizing sound disturbance in the surrounding areas.

Absorption

Absorption describes the loss of energy as a wave propagates through a medium and/or strikes a boundary (e.g. sound absorption through friction as a sound wave propagates in air, through trapping as a sound wave strikes a pliable and/or porous boundary, thermal loss of energy, etc.).  The degree to which a medium absorbs sound is expressed by its coefficient of absorption, which is related to the medium's material and is usually frequency dependent.  Absorption results in the damping of a vibration.

The Doppler effect

See an animated example and make sure you visit Dr. D. Russell's Web page (Kettering University), for details on the Doppler effect and an outline of sonic booms.

 Illustrations of the Doppler effect - dependence on source speed (from the University of Saskatchewan, Department of Engineering Physics): source velocity = 1/2 * sound velocity; source velocity = 2 * sound velocity.
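The frequency shift itself can be computed from the standard classical Doppler relation for still air (a textbook formula, not taken from the animations above); speeds are positive when source and listener move toward each other:

```python
# Hedged sketch of the classical Doppler formula in still air.
# c = speed of sound; v_source > 0 means the source approaches the listener,
# v_listener > 0 means the listener approaches the source.

def doppler(f_source, c=345.0, v_source=0.0, v_listener=0.0):
    """Observed frequency for a moving source and/or listener."""
    return f_source * (c + v_listener) / (c - v_source)

# A 440 Hz source approaching at one tenth the speed of sound is heard
# raised in frequency; the same source receding is heard lowered.
print(doppler(440.0, v_source=34.5))
print(doppler(440.0, v_source=-34.5))
```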


_ Review the list of interactive animations used in the course so far.

_ Also, visit and bookmark the interactive online resource on the physics and psychophysics of sound and hearing from Georgia State University and the Sound, Waves, and Music chapter of the Online Physics Classroom.

When musical instruments are examined in terms of the sound they produce, they can be better understood as driver → generator → coupler → resonator → atmosphere coupling systems.

The heart of any musical instrument is its sound generator.  This is the part that, when excited by a driving force, will be the first to vibrate, initiating the sound waves that travel through the air and eventually reach our ears or a microphone.
Some examples of generators include: strings (violin, piano,...), reeds (clarinet, sax,...), air (flutes,...), lips (trumpet,...), skins/membranes (various drums,...), metal plates (cymbals, gongs,...), vocal folds (voice), etc.
The vibration characteristics of a generator outline the frequency and spectral ranges of the sounds produced.

The driver that excites (sets into vibration) a generator can apply a driving force in various ways (e.g. bowing, plucking, striking, blowing, etc.) and at various locations on the generator [e.g. in the middle (i.e. away from any support), towards the edges (i.e. near a support), etc.].  In general, the closer a driving force is applied to the middle of a generator (such as a string, a membrane, or a bar) the richer in low frequency components the resulting spectrum will be.
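The claim about driving location can be illustrated with the standard ideal plucked-string result (a textbook formula, not derived in the text): the amplitude of the n-th harmonic is proportional to sin(nπp)/n², where p is the plucking point as a fraction of string length. Plucking at the middle suppresses even harmonics and concentrates energy in the lowest components; plucking near an end spreads relatively more energy into higher harmonics:

```python
import math

# Illustrative sketch of ideal plucked-string harmonic amplitudes.
# pluck_fraction p: plucking point as a fraction of the string's length.
def harmonic_amplitudes(pluck_fraction, n_harmonics=8):
    """Relative amplitude of harmonics 1..n for an ideal plucked string."""
    return [abs(math.sin(n * math.pi * pluck_fraction)) / n**2
            for n in range(1, n_harmonics + 1)]

middle = harmonic_amplitudes(0.5)    # plucked at the middle
near_end = harmonic_amplitudes(0.1)  # plucked near a support
print([round(a, 3) for a in middle])
print([round(a, 3) for a in near_end])
```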

Energy can also be supplied at various rates, with the duration of the contact between driving force and generator varying in length.  For example, one may pluck a string with the fingertips (slower rate / longer contact) or with a plectrum (faster rate / shorter contact); or one may strike a drum with a felt-tip (slower rate / longer contact) or a wooden-tip (faster rate / shorter contact) mallet, etc.  In general, the faster the rate and the shorter the contact, the more complex and perceptually important the attack portion of the resulting sound's envelope, and the richer in high-frequency components the resulting spectrum.

A generator is always coupled to (connected to or placed near), and therefore amplified, sustained, and "shaped" (in terms of spectrum) by, some sort of resonating system.
Examples of resonating systems include: soundboard (piano,...), air cavity & body (lutes, violins,...), air column (horns,...), vocal tract (voice), mouth cavity (jew's-harp,...), etc.  Size, shape, material, and construction of a resonator are all parameters that determine its resonance characteristics and its influence on the sound produced.  Resonating systems incorporated into musical instruments are preferably broadly (rather than sharply) tuned.  In other words, they are designed in such a way that they can resonate at, and therefore amplify, a wide range of frequencies, usually matching the frequency range of the instrument they are attached to.
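The difference between broad and sharp tuning can be sketched with the steady-state response of a driven resonator, using its quality factor Q (the natural frequency and Q values below are illustrative assumptions, not from the text). A high-Q (sharply tuned) resonator amplifies a narrow band strongly; a low-Q (broadly tuned) resonator responds more evenly across a wide range:

```python
import math

# Relative steady-state amplitude of a driven resonator with natural
# frequency f0 and quality factor Q (standard driven-oscillator form).
def response(f, f0=440.0, q=5.0):
    return 1.0 / math.sqrt((f0**2 - f**2)**2 + (f * f0 / q)**2)

for q in (2.0, 50.0):  # broadly vs sharply tuned
    ratio = response(440.0, q=q) / response(350.0, q=q)
    # sharp tuning -> much larger peak-to-off-peak ratio
    print(q, round(ratio, 1))
```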

A coupler provides the link between generator and resonator (e.g. various types of bridges, etc.), and may facilitate their impedance matching/mismatching.  In the case of coupling through proximity, the coupler is air.  There is always some degree of energy feedback between generators and resonators (which depends on coupling and may differ for different notes on the same instrument), meaning that generators and resonators may alternate roles periodically.  The degree of feedback and the nature of its periodicity determine, to an extent, the balance between an instrument's radiation efficiency (see below) and decay time (i.e. how long it takes for the vibrations of the generator/resonator to die out after energy supply has stopped.)  The difference, for example, between the decay times of a hammer dulcimer and a classical guitar string reflects, in part, the different degree and periodicity of feedback between their respective generators and resonators, which depends on coupling.

An instrument's coupling to the atmosphere (the way sound waves are transferred from the instrument to the surrounding air; e.g. the bell on a horn) also influences the spectral characteristics of the sound produced, as well as its radiation pattern (see below.)

Radiation efficiency: Term describing the ratio of acoustic pressure just outside the end of an instrument to that just inside.  Radiation efficiency varies according to instrument construction and the note being produced, and is ultimately a function of the impedance difference between the inside and the outside of an instrument.  In general, high frequencies radiate more efficiently than low frequencies.
Radiation pattern: Term describing the way energy radiated from a sound source spreads away from it.  Generally speaking, high frequencies are more directional (they follow a narrower path) than low frequencies, which spread more or less spherically (remember the discussions on the inverse-square law and on diffraction).

The radiation efficiency and radiation pattern of a musical instrument play a decisive role in the relationship between its perceived timbre and the position of a listener or microphone relative to the instrument.

Signal Processing  &  Digital Signals

See your textbook (Plack, 2005, Chapter 3: 46-59) for a discussion on filters, impulse responses, linear & non-linear systems, distortion, and signal representation and manipulation in the digital domain.

Columbia College, Chicago - Audio Arts & Acoustics