• Why are electronic texts suspect?
• Can you tear out a page on a screen?
• How does chopping up sentences make them coherent?
• When do actions speak louder than words?
• How can we use questions to map out knowledge needs?

We begin this chapter by looking at what is to be gained from understanding the relationship between written and spoken language. The consequences of putting words on the screen are explored, in terms of changes in the meaning of terms, pronunciation, and the effect of spatial proximity on meaning. We then move on to consider aspects of verbal interaction, such as politeness and fluency, and conclude with an overview of users' knowledge needs identified by analyzing their language.

Written texts all have to be related somehow, directly or indirectly, to the world of sound, the natural habitat of language, to yield their meanings. Historically, and in an individual's development, speech comes before writing. For a small child, language is all speech. This is obviously not so for older children and adults; for some, language is nearly all reading and writing. Still, for most people, language is strongly associated with sound, both concretely, through hearing and producing language, and through mental association.

Where computers are used, spoken and written language are both present in some way (not necessarily at the same time), not least because it is most unusual for someone to use an application without ever speaking about its use! In general, indirect reference from written language to sound, through a reader's prior experience of spoken language or through a special notation, is acceptable in many circumstances, such as in books and newspapers. The question is: what, if anything, do we lose when real sound is missing? The physical demands on the reader (user) are now focused on visual processing.