Anatomically, the human ear is divided into three parts: the external ear, which consists of the pinna or auricle (the part protruding from the head) and the external auditory canal; the middle ear, a narrow, air-filled cavity behind the tympanum (eardrum) that is spanned by three tiny bones, the malleus, incus, and stapes, or more commonly, the hammer, anvil, and stirrup; and finally the inner ear, a labyrinth of fluid-filled passages housing the cochlea (the organ of hearing), the vestibular apparatus (which gives us our sense of balance), and the endings of the eighth cranial nerve.  As incoming sound waves are funneled through the auricle and strike the tympanum, they set in motion a rapid chain of events resulting in their transduction into neural impulses and subsequent relay to the primary auditory cortex in the temporal lobe.
In general, the word “ear” as employed in popular discourse refers to the external ear alone.  But aside from its literal definitions, the word is used in countless idioms and colloquialisms. Some people are all ears, and others gladly lend them. Words sometimes fall on deaf ears, or go in one ear and out the other. There is music to our ears, and we play it by ear—occasionally a tin ear. Not coincidentally, many everyday figures of speech involving ears can substitute the name of another sensory organ and remain meaningful. Instead of lending an ear, for instance, we might lend a hand. “Keep an ear out” becomes “keep an eye out,” “turn a deaf ear” becomes “turn a blind eye,” there is both ear-candy and eye-candy, a disagreeable noise is an ear-sore and anything unsightly is an eye-sore. The preponderance of such exchanges says as much about language in flux as it does about the interdependence and functional overlap of the five senses (see synaesthesia), but that the majority of such idioms and tropes move fluidly from hearing to seeing indicates the extent to which these two senses are intertwined—bound up in a dialectic between word and image, a linkage implicitly rooted in the melos/opsis/lexis triad. (Here, melos is the province of the ear and opsis the eye, but lexis, comprising sight and sound, belongs to both organs.)
The full range of human hearing is situated between 20 Hz and 20 kHz, although the most important frequencies for processing speech are the three octaves between 500 Hz and 4000 Hz.  Of all the sensory organs in the human body, the ear is best equipped for temporal perception, as it generally isolates and tracks individual details in its auditory field through time; it also plays a strong ancillary role (to sight and touch) in depth perception, allowing one to make judgments based on the intensity of noises, changes in pitch, echoes, and the time lapse between visual and auditory perception of an event.  Like other sensory organs, ears can be tricked. One famous auditory illusion was performed in 1928 by the American psychologist P. T. Young, with a device called the pseudophone, which consisted of two ear trumpets twisted so as to transmit sounds from one side of the head to the opposite ear; the effect was to disorient the listener and muddle his or her perception of space.  The Doppler effect may lead one to believe that the pitch of a passing car horn actually drops, even though the horn itself sounds at a constant frequency. When two noises are heard in rapid succession, the ear measures the second’s intensity in relation to the first, so that a murmur following a whisper might sound like a shout.
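The frequency figures above lend themselves to quick verification: the speech band from 500 Hz to 4000 Hz spans exactly three octaves (each octave doubles the frequency), and the classical Doppler formula predicts how a horn's perceived pitch falls as a car passes. A minimal sketch in Python; the speed of sound and the 440 Hz horn are illustrative assumptions, not figures from the text:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed value)

def octaves(f_low: float, f_high: float) -> float:
    """Number of octaves between two frequencies (each octave doubles f)."""
    return math.log2(f_high / f_low)

def doppler(f_source: float, v_source: float, approaching: bool) -> float:
    """Perceived frequency for a stationary listener and a source
    moving directly toward (or away from) the listener at v_source m/s."""
    sign = -1.0 if approaching else 1.0
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND + sign * v_source)

# The speech band cited in the text spans three octaves:
print(octaves(500, 4000))              # → 3.0

# A hypothetical 440 Hz horn on a car moving at 30 m/s (~108 km/h):
print(round(doppler(440, 30, True)))   # → 482 (approaching: pitch raised)
print(round(doppler(440, 30, False)))  # → 405 (receding: pitch lowered)
```

The listener never hears the horn's true 440 Hz; the pitch jumps from raised to lowered as the car passes, which is why the drop is so conspicuous.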
From a McLuhanist standpoint, the ear is significant in that it seems routinely to submit to a wider variety of extensions than any other part of the body. Certainly, it is the most common sensory site for prostheses, specifically hearing aids. Whereas in the past one would need to resort to passive amplification cones (“ear trumpets”) or bulky, body-worn aids, technological advances have brought about smaller battery-powered and wireless models that can be worn behind the ear, in the ear, or even anchored to the temporal bone. These newer, increasingly compact models are often tailored to look trendy and inconspicuous, sometimes appearing indistinguishable from earrings or wireless headsets.  Cochlear implants provide a far more expensive option, albeit one suited to the profoundly deaf or severely hard of hearing. They differ from conventional hearing aids in that they bypass the damaged parts of the ear altogether and stimulate the auditory nerve directly, altering the hearing process so radically that nearly the entire process must be relearned.  Like many other technologically advanced prostheses that demand neurological relearning, cochlear implants raise questions about the human/post-human divide and cyborg culture.
Just as auricular augmentations can amplify external sensory input, they can also limit or suppress it, cutting one off from his or her immediate surroundings. Earplugs are used in the service of voluntary sensory deprivation, sometimes for protection, as when swimming or flying, sometimes when one is in want of sleep or concentration in a noisy environment. Headphones similarly suppress one’s immediate sonic milieu, but rather than effecting silence they connect one to an alternate, localized source of sound, such as a CD player or MP3 player, perhaps channeling the body in a digital direction. No matter the particular numbing or amputative consequences of these devices, their employments as extensions of the body are for McLuhan “attempts to maintain equilibrium.”  Efforts to attain equilibrium by way of the ear are perhaps more complex than with any other sensory organ. No other organ is treated as invasively, subject as it is to the intrusions of these and other extensions, objects, and organs (telephone, radio, Q-tips, one’s “ear finger”); and none utilizes as many forms of extension that amplify external sensory data and bridge long distances as it does those forms that limit it.
On account of its commonly denoted externality, the ear is also a fiercely visual organ. As a result, it is subject to certain culturally specific aesthetic principles, as are its extensions. Far from achieving the beauty of a rare bird, protruding, asymmetric, and “cauliflower” ears are generally considered unattractive in the United States and Western Europe; in recent years, otoplasty, or aesthetic ear surgery, has become a viable option for those wishing to pull protruding ears closer to the skull or reduce outsized earlobes. Nevertheless, the most common cosmetic embellishments involve jewelry. Historically, ears have been ornamented across cultures, most commonly through piercing. In the United States, ear piercing is increasingly prevalent among both sexes, though it was traditionally limited to women. The lobe remains the most common site for piercings, although various cartilaginous piercings have grown popular as well, including tragus, rook, helix, and industrial piercings. Among lobe piercings, studs, hoops, and dangle earrings are the most popular; less conventional are flesh plugs and tunnels, grounded in Asian and African tribal traditions of lobe stretching.
The innovators of early sound technologies often strove for their apparatuses to simulate bodily form and function, not only sound-producing organs such as the larynx and mouth, but sound-receiving organs as well—such as Alexander Graham Bell’s ear phonautograph, a mechanism constructed from dissected human auditory parts that transcribed the sound waves passing through its horn.  Such externalizations of the inner body served the interests of scientists seeking a more objective understanding of acoustics and of how the body processes sensory data, although phonautographs in general remained scientific curiosities until the creation of the phonograph; only then was it realized that the inscriptions the former device recorded on rolls of paper were sound waves, which, with the proper playback mechanism, could re-create the corresponding sounds.
The ear occupies a special place in film, sometimes literally, as in David Lynch’s films, where ears and hearing figure centrally in the imagery and narrative, but also because of its perceptual absence in the first thirty years of the medium’s history—at least relative to a sound era that takes the ear’s participation for granted. Sound theorist Michel Chion argues that the audiovisual contract in cinema hides the fact that visual and auditory perception are of vastly different natures. The ear, he notes, “analyzes, processes, and synthesizes faster than the eye.”  A swift visual movement, for instance, such as a hand passing before a face, does not form nearly as distinct a figure in the memory as does an abrupt sound of similar duration. This difference is acknowledged in sound films, particularly in frenetic action sequences where fast movements and rapid editing are punctuated by a quick succession of sounds that “spot” the images, telling the spectator where to look.  That we rely on our ears to correct our eyes can be exploited as well, by such cinematic prestidigitation as the automatic sliding doors in Star Wars—a “whoosh” on the soundtrack is enough to convince viewers that they have seen a door open or close when they merely witnessed a straight cut between two shots of the door, open and then closed.
How cinematic sounds are presented to the ear is significant as well, though this was not widely recognized until the advent of Dolby Stereo in the mid-1970s. The incorporation of four audio channels (left, right, center, surround) helped revolutionize the movie-going experience, creating a new kind of richly textured, multidimensional cinematic space that could more fully envelop the viewer, one that altered critical perceptions of the cinema as a primarily visual medium.  The multiplication of discrete audio channels and speakers effectively augments the number of extensions of the spectator’s ear, resulting not only in a more realistic simulation of the events onscreen but also complicating his or her relationship to the image. With the possibility of a wider variety of sounds in an increasingly dynamic relationship to one another, the ear is immersed in a more complex network of sound trajectories to follow, enabling more varied readings of the visual images and more active participation in the construction of meaning. At the same time, spectators find themselves more prone to illusions and sleight-of-hand trickery, demonstrated by the ease with which films and preview advertisements can fool audiences as to the source of the sound of a ringing cell phone. The inevitability of further technological advancements in sound systems, with the potential for a seemingly infinite number of auricular extensions, raises the very real possibility of creating an alternate aural reality indistinguishable from the current one, and perhaps even superior to it.
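At bottom, the spatialization described above works by distributing a sound’s energy across discrete channels. As an illustrative sketch only (a generic constant-power pan law, not Dolby’s actual matrix encoding), a mono source can be placed between two speakers by scaling it with complementary cosine/sine gains so that total acoustic power stays constant at every position:

```python
import math

def constant_power_pan(position: float) -> tuple[float, float]:
    """Left/right gains for a mono source; 0.0 = hard left, 1.0 = hard right.

    The standard constant-power (sin/cos) pan law guarantees
    left**2 + right**2 == 1 at every position, so perceived
    loudness stays steady as the sound moves across the field.
    """
    angle = position * math.pi / 2
    return math.cos(angle), math.sin(angle)

for pos in (0.0, 0.5, 1.0):
    left, right = constant_power_pan(pos)
    print(f"pos={pos}: L={left:.3f} R={right:.3f} power={left**2 + right**2:.3f}")
```

Moving `position` over time traces exactly the kind of “sound trajectory” the ear follows through a multichannel mix; with more speakers, the same principle is applied pairwise between adjacent channels.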