Humans hear sounds from 20 Hz to 20,000 Hz (20 kHz).


Test your lower and upper limit with this video:




The ear is a frequency analyser: different frequencies land on different points of a long platform (the cochlea), depending on the mechanical properties of the platform at each point.



What happens when we hit a drum?

Its surface is set in motion and starts to vibrate, i.e., to move up and down very quickly. The molecules of the surface collide with the air molecules and push them a little bit “forward”. These push their neighbors and then come back to their original position. The air molecules that have been pushed push their own neighbors in turn and then return to their initial position, and so on.


The air molecules leave their original position and return to it; they are vibrating or oscillating. This creates compressions and rarefactions in the air, which correspond to waves.

(Note that a sound of 20 Hz corresponds to 20 compressions/rarefactions per second).
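To make these numbers concrete, here is a small sketch (in Python) relating a sound's frequency to its period and to its wavelength in air; the speed-of-sound value of 343 m/s is an assumed round figure for air at room temperature:

```python
# Relate a sound's frequency to its period and wavelength in air.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (an assumed value)

def period_s(frequency_hz: float) -> float:
    """Duration of one compression/rarefaction cycle, in seconds."""
    return 1.0 / frequency_hz

def wavelength_m(frequency_hz: float) -> float:
    """Distance between successive compressions, in metres."""
    return SPEED_OF_SOUND / frequency_hz

for f in (20.0, 1000.0, 20000.0):
    print(f"{f:>8.0f} Hz: period {period_s(f) * 1000:.3f} ms, "
          f"wavelength {wavelength_m(f):.4f} m")
```

So a 20 Hz tone has a period of 50 ms and a wavelength of roughly 17 m, while a 20 kHz tone has a wavelength under 2 cm.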


An illustrative animation is found at this link. Quote: “Pick a single particle and watch its motion. The wave is seen as the motion of the compressed region (ie, it is a pressure wave), which moves from left to right.”



How do we hear the sound of a drum?

Interestingly, our ear has a drum of its own, the "eardrum" or "tympanic membrane" (the Greek word for drum is “tympanon”). It is shown in Figure 1 below in dark green.



Figure 1: A diagram of the anatomy of the human ear. (By Lars Chittka; Axel Brockmann - Perception Space—The Final Frontier, A PLoS Biology Vol. 3, No. 4, e137 doi:10.1371/journal.pbio.0030137 (Fig. 1A/Large version), vectorised by Inductiveload, CC BY 2.5)



The vibrating air molecules push on the surface of the eardrum and set it in motion.


You hit a drum with a drumstick. Interestingly, our eardrum has a drumstick attached to it. Instead of the stick hitting the drum, the drum hits the stick! In fact, there is a series of three sticks, each transmitting motion to the next: the malleus (hammer), the incus (anvil), and the stapes (stirrup). They are shown in Figures 1 and 2.




Figure 2: Tympanic membrane, malleus, incus and stapes. (By BruceBlaus, "Medical gallery of Blausen Medical 2014", WikiJournal of Medicine 1 (2). DOI:10.15347/wjm/2014.010. ISSN 2002-4436. Own work, CC BY 3.0)



Adjacent to the stapes is a snail-like chamber called the “cochlea”, after the Greek word for snail (in purple in Figure 1). It consists of three long tubes filled with fluid, which can be seen as cavities in the transverse section presented in Figure 3: the tympanic duct (scala tympani), the vestibular duct (scala vestibuli) and the cochlear duct (scala media). The third stick mentioned, the stapes, has a footplate that seals the oval window of the vestibular duct, as shown in Figure 1. When sound arrives at the stapes, it pushes on the fluid of that duct.



Figure 3: Cochlea in transverse section on the right and in 3D at inset on the left. (By OpenStax College, CC BY 3.0, via Wikimedia Commons; modified by removal of indications)




A very important structure of the cochlea is the basilar membrane, a stiff structural element that separates the cochlear duct from the tympanic duct. If we imagine the cochlea unwound as shown in Figure 4, the basilar membrane could be considered to resemble an airplane runway.





Figure 4: Representation of cochlea in "unwound" form






Figure 5: Sound wave propagation from middle ear to cochlea



As the sound travels down the basilar membrane, it sets the membrane vibrating. The vibration at a given point depends on the membrane's material or mechanical properties there (stiffness and width), as well as on a specific type of cells, termed outer hair cells, which actively modify the response that would occur from the mechanical properties alone.


Quoting Wikipedia "As shown in experiments by Nobel Prize laureate Georg von Békésy, high frequencies lead to maximum vibrations at the basal end of the cochlear coil, where the membrane is narrow and stiff, and low frequencies lead to maximum vibrations at the apical end of the cochlear coil, where the membrane is wider and more compliant. This "place–frequency map" can be described quantitatively by the Greenwood function and its variants."
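The Greenwood place–frequency map mentioned in the quote can be sketched numerically. The constants below are commonly cited human values (A ≈ 165.4 Hz, a ≈ 2.1 with position expressed as a fraction of membrane length measured from the apex, k ≈ 0.88); treat them as illustrative rather than definitive:

```python
# Greenwood place-frequency map for the human basilar membrane:
#   F(x) = A * (10**(a*x) - k)
# Constants are commonly cited human values; treat as illustrative.
A = 165.4   # Hz
a = 2.1     # with x as a fraction of membrane length from the apex
k = 0.88

def greenwood_hz(x: float) -> float:
    """Characteristic frequency at position x (0 = apex, 1 = base)."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> {greenwood_hz(x):8.1f} Hz")
# Low frequencies (~20 Hz) map to the apex, high (~20 kHz) to the base,
# matching von Bekesy's observations quoted above.
```

Note how the endpoints of the map land close to the limits of human hearing: about 20 Hz at the apex and about 20 kHz at the base.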


Additional information from

"One feature of such a system is that regardless of where energy is supplied to it, movement always begins at the stiff end (i.e., the base), and then propagates to the more flexible end (i.e., the apex). Georg von Békésy, working at Harvard University, showed that a membrane that varies systematically in its width and flexibility vibrates maximally at different positions as a function of the stimulus frequency (Figure 13.5). Using tubular models and human cochleas (...) he found that an acoustical stimulus initiates a traveling wave of the same frequency in the , which propagates from the base toward the apex of the basilar membrane, growing in amplitude and slowing in velocity until a point of maximum displacement is reached. This point of maximal displacement is determined by the sound frequency. The points responding to high frequencies are at the base of the basilar membrane, and the points responding to low frequencies are at the apex, giving rise to a topographical mapping of frequency (that is, to )."


The above is demonstrated in Figures 4, 5 and 6, and in an animation which is presented in Figure 7.


(Interesting discussion at this link.)


Figure 6: Traveling waves along the cochlea

(Figure 13.5 from





Figure 7: Lower-frequency sound (2 kHz) falls on the apex of the cochlea, while higher-frequency sound (6 kHz) falls on the base.




Above the basilar membrane there is an epithelial strip, termed the organ of Corti, which includes the hair cells, the sensory receptors of the auditory and vestibular systems. It is represented in Figure 3 and at this link. The deflection of their hair-like protrusions leads to the opening of mechano-sensitive ion channels and to the generation of action potentials that are transmitted to the brain.





The ear canal acts like a resonator


"The ear canal acts like a resonator, and because it is closed at one end, it acts like a quarter-wave-length resonator whose wavelength is four times the length of the canal (average canal length is 25 mm in adults) or approximatively 100 mm. This equates to a sound of approximatively 3.3 kHz; therefore, the ear canal amplifies sound around 3.3 kHz in adults and higher frequencies (according to ear length canal lenght) in infants and children. The outer ear increases sound level by 10 to 15dB in a frequency range from 1.5 to 7 kHz, and this resonance includes the concha bowl of the pinna which has a resonant frequency around 5 kHz."



The vestibular system: the otolith organs (saccule and utricle) and the three semicircular canals


The vestibular system is the sensory system that provides the sense of balance as well as information on the position of the body in space (spatial orientation). It consists of the semicircular canals and the otolith organs.


The Semicircular Canals.

The semicircular canals are three interconnected semicircular tubes which sense rotations of the head, i.e., angular accelerations.


The Otolith Organs: The Utricle and Sacculus.

The otolith organs, which are the utricle and the sacculus, sense displacements and linear accelerations of the head. They contain a sensory epithelium, the macula, which includes hair cells. Above the hair cells there is a gelatinous layer, which is overlaid by a fibrous structure, the otolithic membrane, with embedded crystals of calcium carbonate called otoconia (Figures 14.3 and 14.4A in the above link). These crystals give the otolith organs their name, as “otolith” is Greek for “ear stone”. During movement, these crystals are responsible for the generation of potentials in the hair cells, as described at “The Otolith Organs: The Utricle and Sacculus”.


Please also refer to Figure 14.5 for the linear acceleration of the head.


Clinical assessment: “It is possible to assess saccular function through use of the cervical vestibular evoked myogenic potential (cVEMP).”


Image (Reference):






Otoacoustic emissions: a standard auditory test 


Human sonar: the ear not only receives sound but also emits faint sounds at frequencies that are characteristic of each person (a biometric feature)


Broadly speaking, there are two types of otoacoustic emissions: spontaneous otoacoustic emissions (SOAEs), which can occur without external stimulation, and evoked otoacoustic emissions (EOAEs), which require an evoking stimulus.


Clinical importance

Otoacoustic emissions are clinically important because they are the basis of a simple, non-invasive test for hearing defects in newborn babies and in children who are too young to cooperate in conventional hearing tests.


Biometric importance

In 2009, Stephen Beeby of the University of Southampton led research into using otoacoustic emissions for biometric identification. Devices equipped with a microphone could detect these faint emissions and potentially identify an individual, thereby providing access to the device without the need for a traditional password.[15] It is speculated, however, that colds, medication, trimming one's ear hair, or recording and playing back a signal to the microphone could subvert the identification process.[16]


(End of excerpts).

pdf link to S. Beeby paper:


Suggested reference

"Ear noise can be used as identification"





"Animal Sonar: Processes and Performance"

Link (with excerpt related to human)