
Directional Effects on Infants and Young Children in Real Life: Implications for Amplification

Teresa Y. C. Ching
National Acoustic Laboratories, Chatswood, New South Wales, Australia, and The HEARing Cooperative Research Centre, Victoria, Australia

Anna O’Brien
National Acoustic Laboratories

Harvey Dillon
National Acoustic Laboratories and The HEARing Cooperative Research Centre

Josef Chalupper
Siemens Audiologische Technik GmbH (Siemens Hearing Instruments), Erlangen, Germany

Lisa Hartley, David Hartley, and George Raicevich
National Acoustic Laboratories

Jens Hain
Siemens Audiologische Technik GmbH (Siemens Hearing Instruments)

Purpose: This study examined the head orientation of young children in naturalistic settings and the acoustics of their everyday environments for quantifying the potential effects of directionality. Method: Twenty-seven children (11 with normal hearing, 16 with impaired hearing) between 11 and 78 months of age were video recorded in naturalistic settings for analyses of head orientation. Reports on daily activities were obtained from caregivers. The effect of directionality in different environments was quantified by measuring the Speech Transmission Index (STI; H. J. M. Steeneken & T. Houtgast, 1980). Results: Averaged across 4 scenarios, children looked in the direction of a talker for 40% of the time when speech was present. Head orientation was not affected by age or hearing status. The STI measurements revealed a directional advantage of 3 dB when a child looked at a talker but a deficit of 2.8 dB when the talker was sideways or behind the child. The overall directional effect in real life was between –0.4 and 0.2 dB. Conclusions: The findings suggest that directional microphones in personal hearing devices for young children are not detrimental and have much potential for benefits in real life. The benefits may be enhanced by fitting directionality early and by counseling caregivers on ways to maximize benefits in everyday situations. KEY WORDS: directional microphones, infants, young children, hearing aids, pediatric amplification, naturalistic settings
Children acquire knowledge and develop communication skills through their experiences in everyday environments (Akhtar, 2005). Extracting speech information in these environments is challenging not only for children with impaired hearing but also for children with normal hearing (Byrne, 1983; Cameron, Dillon, & Newall, 2006; Elliott et al., 1979; Jamieson, Kranjc, Yu, & Hodgetts, 2004; Litovsky, 2005). Jamieson et al. (2004), for example, reported a decline in word recognition of children with normal hearing (aged 5–8 years) as signal-to-noise ratio (SNR) decreased. In a similar vein, children with impaired hearing who attained near-perfect scores at +12 dB SNR in word and sentence recognition tests deteriorated to chance performance when SNR decreased to –6 dB (Crandell, 1993; Finitzo-Hieber & Tillman, 1978). Further, younger children were found to have greater difficulty than older children and adults (Crandell & Smaldino, 2004; Johnstone & Litovsky, 2006). Infants and young children who have normal hearing, for example, required up to 25 dB better SNR than 8-year-old children to achieve similar recognition performance (Byrne, 1983; Nabelek & Robinson, 1982; Nozza, Rossman, Bond, & Miller, 1990). These behavioral data are consistent with neurodevelopmental research which suggests that even though absolute auditory sensitivity of infants is adultlike within the first year of life, auditory

Journal of Speech, Language, and Hearing Research • Vol. 52 • 1241–1254 • October 2009 • © American Speech-Language-Hearing Association 1092-4388/09/5205-1241

resolution and attention continue to develop over the first decade of life (Ponton, Eggermont, Kwong, & Don, 2000; Werner, 1996).

The use of frequency-modulated (FM) systems has been recommended for improving speech reception in noise by children with impaired hearing (Lewis, 1991; Madell, 1992). FM systems enhance SNR by sending a signal from a remote microphone directly to the ear of a listener using wireless transmission technology (Crandell & Smaldino, 2000). Although an SNR advantage of up to 20 dB could be obtained when only the signal from the remote microphone was amplified in the listener’s ears (Hawkins, 1984), the actual advantage would depend on the ratio of sensitivity between the wireless and the hearing-aid microphones. Further, effective use of wireless technology relies on a user’s ability to pass the FM transmitter to different talkers as listening venues and communication partners change. To achieve this for child users, vigilant monitoring by an adult at all times is necessary (Moeller, Donaghy, Beauchaine, Lewis, & Stelmachowicz, 1996).

An alternative method for enhancing speech reception in noise is to use directional microphones in personal hearing aids (e.g., Walden, Surr, Cord, Edwards, & Olson, 2000). When there are moderate levels of noise and reverberation, directional microphones improve the SNR of speech in front by attenuating sounds from nonfrontal sources if the talker is close to the listener. From physical principles, it is clear that if a listener faces a talker, if the talker–listener distance is not too much larger than the critical distance of a room, and if sufficient noise or reverberation is present, directional microphones will offer an improved SNR to a listener. In laboratory-based settings, directional advantages have been demonstrated by presenting speech at 0° azimuth and a competing sound at 180° azimuth.
For adults with moderate to severe hearing loss, the SNR improvement due to directionality ranged from 4 dB to 8 dB (Hawkins & Yacullo, 1984; Pumford, Seewald, Scollie, & Jenstad, 2000; Valente, Fabry, & Potts, 1995; see Bentler, 2005, for a review). For children over 4 years of age with mild to moderate–severe hearing loss, word and sentence recognition improved by up to 5 dB when listening in the directional mode compared with the omnidirectional mode (Gravel, Fausel, Liskow, & Chobot, 1999). In simulated classroom situations where speech was presented from the front and noise from four corners of a test room, children between 10 and 17 years of age with moderate hearing loss obtained directional benefits of 3 dB SNR in sentence recognition (Ricketts, Galster, & Tharpe, 2007). In real-world settings, on the other hand, directional benefit has been more difficult to prove (Cord, Surr, Walden, & Olson, 2002; Palmer, Bentler, & Mueller, 2006; Surr, Walden, Cord, & Olsen, 2002; Walden, Surr, Cord, &
Dyrlund, 2004). In a study of children aged between 10 and 17 years, Ricketts, Galster, and Tharpe (2007) indicated that children did not rate directional and omnidirectional modes in their personal hearing aids differently in a range of real-life situations. This is not surprising because factors including head orientation of the listener, distance between talker and listener, relative location of signal and competing noises (Ricketts, 2000), and reverberation of the listening environments (Ricketts & Hornsby, 2003) influence the effectiveness of directional microphones for improving listening in noise (Ricketts & Galster, 2008; Ricketts et al., 2007; Ricketts & Hornsby, 2007; Walden et al., 2004). Even though there were no data on talker–listener distances and acoustics of the everyday environments of children, the benefit obtained by school-aged children in laboratory-based settings has led to the provision of directional microphones to older children (e.g., Bohnert & Brantzen, 2004; Condie, Scollie, & Checkley, 2002; Kuk, Kollofski, Brown, Melum, & Rosenthal, 1999). Support for this practice is provided by recent research on head orientation of school-aged children in school settings, which indicated that the children often looked at the primary talker in multitalker situations (Ricketts & Galster, 2008). On the other hand, there was no research on head orientation of young children, on their activities in real-life settings, or on the acoustics of these settings. Data from neonatal studies, nevertheless, suggest that infants turn their eyes or heads toward a sound source in the correct hemifield within the first month of life (e.g., Ashmead, Clifton, & Perrin, 1987; Muir & Field, 1979; Muir, Clifton, & Clarkson, 1989), and this behavior is often accompanied by visual search by about 4 months of age (Muir et al., 1989; Smith, Quittner, Osberger, & Miyamoto, 1998).
If infants and young children look in the direction of a primary speech source in everyday situations, then directional processing in personal hearing devices is likely to benefit them when they have to listen to speech in noise. The critical question is, therefore, not whether infants can orient to the sound source of interest, but how often they do so in real life. In addition to uncertainties about head orientation of young children in real life, there is the assumption that directionality may have detrimental effects on children’s access to sounds in the environment, thereby reducing opportunities for incidental learning. Due to the lack of evidence to either support or refute these beliefs, it has been common practice not to provide directional settings in hearing aids for infants and young children, even though they have greater difficulty listening in noise than older children and despite the known benefits of directionality for speech reception in noise. To optimize aided listening for young children in the early years of life, it is important to increase our
understanding of not only the auditory behavior and listening environments of young children in real life but also the magnitude of directional benefits and deficits for young children in their naturalistic settings. This information, rather than common belief, underpins an evidence-based approach to audiological management of children with impaired hearing and counseling of their families and professionals. Information on whether age and presence of hearing loss affect young children’s head orientation behavior in real life is also crucial in determining candidacy for directional microphones. Although recent data from children between 4 and 17 years of age suggest that neither the presence of hearing loss nor age affects the accuracy with which children orient toward a primary speech source in didactic classroom situations (Ricketts & Galster, 2008), children’s auditory behavior in structured classroom settings might not be generalizable to unstructured real-life situations (Vidas, Hassan, & Parnes, 1992). Furthermore, there were no data on children younger than 4 years of age in any setting. The purpose of this study was to examine the effect of directional microphone technology in hearing aids for infants and young children in naturalistic settings. To determine children’s head orientation, video recordings of children in different real-life situations were analyzed. This method has been used in previous studies to document head orientation of school-aged children in classrooms (Ricketts & Galster, 2008). To quantify the relative benefit or deficit associated with omnidirectional and
directional microphones, the Speech Transmission Index (STI; Steeneken & Houtgast, 1980) was used. The STI has been found to be effective in predicting changes in speech understanding due to effects of directivity and acoustic environments (e.g., Houtgast, Steeneken & Plomp, 1980; Ricketts & Hornsby, 2003). To obtain information about the daily activities of children, a diary with a questionnaire designed for this research was used.

Method Participants Eleven children with normal hearing (7 boys and 4 girls) and 16 children with impaired hearing (13 boys and 3 girls) between the ages of 11 months and 78 months participated in this study. Figure 1 shows the distribution of age of participants. Children who had specific challenges in addition to hearing impairment (e.g., visual impairment, autism, cerebral palsy) were excluded. Hearing thresholds were assessed by using either visual reinforcement orientation audiometry or play audiometry, depending on the age of the child. Tympanometry was measured for all children. For children with normal hearing, ear-specific screening at 1 and 4 kHz (at 30 and 25 dB HL, respectively) was performed using standard procedures. For children with impaired hearing, hearing thresholds were determined at audiometric frequencies between 0.5 and 4 kHz. In instances where the most recent audiograms were measured within 6 months of

Figure 1. Distribution of age of participants.


Table 1. Mean hearing threshold levels (HTLs) of children with hearing loss in dB HL, standard deviation (SD), and range.

                       Left ear                        Right ear
Frequency (kHz)    M      SD     Range            M      SD     Range
0.25              40.8    6.6    35.0–50.0       44.3    8.4    35.0–55.0
0.5               48.8   12.5    25.0–70.0       48.7   12.3    30.0–75.0
1.0               58.6   16.3    25.0–85.0       56.9   18.5    20.0–90.0
2.0               62.1   13.9    45.0–95.0       63.5   13.0    40.0–90.0
4.0               60.8   18.2    40.0–105.0      59.1   16.0    35.0–85.0

enrollment in the study, these records were used and thresholds were not remeasured. Table 1 shows the audiometric thresholds of children with hearing loss. The mean three-frequency average (0.5, 1, 2 kHz) hearing threshold was 56.3 dB HL (range = 33–83). The children wore behind-the-ear hearing aids with wide dynamic range compression, adjusted according to the NAL-NL1 prescription (Byrne, Dillon, Ching, Katsch, & Keidser, 2001). None of the children had any experience of directional microphone technology in their personal hearing aids. All children used an aural/oral mode of communication.

Measures Video recordings. Video recordings of children in four everyday scenarios were used to estimate the frequency of children’s head orientation to primary speech sources. The scenarios included (a) when a child was interacting directly with a parent/caregiver in a play situation; (b) when the child was not interacting directly with adults who were present in the same room; (c) when the child was indoors with other children and adults; and (d) when the child was outdoors with other children and adults. In the latter three scenarios, conversational speech was not always directed to the child. For each child, the locations and activities for each scenario were selected by the parent/caregiver to be representative of their child’s daily routines in terms of activities and other sound sources that naturally occurred during such activities. There was no information as to whether these naturalistic environments chosen by individual parents were noisier or quieter than average. Two digital video cameras (Sony DCR-HC21E PAL with wide-angle lenses) mounted on tripods were set up opposite each other to simultaneously record a child’s activities from different angles. Because children moved around in the environment, the absolute azimuths of the cameras varied in relation to the target participant(s) during recording. A 15-min sample of each child in each of the four scenarios in the child’s natural environments was collected. During video recording, the researchers observed the child’s proximity to the primary talker in each scenario


and estimated the “best” and “worst” cases for directionality. These corresponded respectively to the smallest and the greatest distance between talker and listener at which the child spent at least 10% of the time. The sound pressure level (SPL) of the talker’s speech at the child’s location for the “best” and “worst” cases was measured by using a sound level meter (BK 2235; Bruel & Kjær [B&K], Nærum, Denmark). These levels were used to set the stimulus levels for subsequent STI measurements. STI. The STI was measured in the same location where video recordings were made. For Scenarios 1 and 2, the STI measurements were carried out with the talker and the child participant absent. During measurement, an international long-term average speech spectrum–weighted maximum length sequence (MLS) signal generated by the B&K Dirac software (Version 3.1) was presented from a loudspeaker located at the position of the talker. The presentation level of the stimulus was adjusted to match the speech levels of the primary talker measured during the video recording sessions. The MLS signal was recorded from two microphones in a behind-the-ear hearing aid case mounted on the right ear of a Knowles Electronics Manikin for Acoustic Research (KEMAR) head located in the same room at the respective distances from the sound source as determined for the “best” and “worst” cases described previously. The outputs of the two microphones were connected to an RME Hammerfall (RME, Haimhausen, Germany) sound card via a pre-amplifier, and the sound card was connected via a FireWire cable to a laptop computer (Hewlett-Packard Compaq nc6000; Hewlett-Packard, Palo Alto, CA), where the hearing aid microphone signals were recorded within the B&K Dirac software.
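The directional response later derived from these two microphone recordings is, in essence, a first-order delay-and-subtract combination of the front and rear signals. The following is a minimal sketch of that principle only, not the study's MATLAB processing; the function name, port spacing, and sampling setup are hypothetical.

```python
import numpy as np

def delay_and_subtract(front_mic, rear_mic, fs, port_spacing_m=0.012, c=343.0):
    """Combine two omnidirectional microphone signals into a first-order
    directional response by delaying the rear signal and subtracting it.

    When the internal delay matches the acoustic travel time across the
    port spacing, the polar pattern is a cardioid with a null at 180°.
    Illustrative sketch with hypothetical parameters.
    """
    # Internal delay, rounded to the nearest whole sample
    delay_samples = int(round(port_spacing_m / c * fs))
    delayed_rear = np.concatenate(
        [np.zeros(delay_samples), rear_mic])[:len(rear_mic)]
    return front_mic - delayed_rear
```

A sound arriving from directly behind reaches the rear port first and the front port one travel time later; after the matched internal delay the two contributions cancel, producing the rearward null that underlies the directional benefit measured here.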
Recordings of signal levels when the KEMAR head was at 0°, 90°, 180°, and 270° relative to the loudspeaker were made in order to quantify the directional benefit or deficit associated with the child facing directly toward, sideways to the left and right, and directly away from the primary talker. In each condition, measurements were replicated 5 times, and the averaged value was used. The same process was repeated for Scenarios 3 and 4, but with participants present and involved in activities similar to those in the video recording wherever
possible so that the sounds that occurred naturally in those scenarios could be captured in the acoustic measurements. Nonstimulus presentations at 0° azimuth were also recorded to establish the noise floor. The measurements from each microphone were recorded as a stereo impulse response file on the computer and postprocessed with a MATLAB script to generate omnidirectional and directional responses (see Figure 2 for polar plots measured on KEMAR). Directionality was optimized via amplitude matching of the microphones, with the required correction first determined in a “look-ahead” procedure and then applied to the whole signal before analysis. This ensured that the directional response reflected the optimal directionality that would be obtained in the field after the first 2 s of use (for hearing aids that automatically correct for differences in microphone sensitivity). The resulting impulse responses were analyzed with the Dirac software. For each child, STIs were calculated for the “best” and “worst” cases in each scenario at each “look-direction” (0°, 90°, 180°, and 270° azimuth) in both the omnidirectional and directional modes. Reverberation. The reverberation of the rooms in which Scenarios 1, 2, and 3 occurred was measured by using a sound-level meter located about two-thirds of the way into the room. The sound-level meter was connected to the same RME Hammerfall sound card as that used for the STI measurement. Pure-tone sweeps were emitted from a loudspeaker located about one-third of the way into the room, with the stimulus peak set high enough to exceed 40 dB above the ambient noise level (or the upper level of comfort for the researchers) but low enough to avoid saturation of the sound card in the computer. Frequency-specific values for T20 (the time it would take
the stimulus intensity to fall by 60 dB, extrapolated from the first 20 dB of the decay curve) were calculated by using the Dirac software. Diary of daily situations. The parent/caregiver of each child was asked to document up to 10 listening situations that accounted for approximately 80% of the child’s daily routine over a week. The data were used to estimate the proportion of time in which children were likely to be in everyday situations where the use of directional microphones may be beneficial.
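The T20 value that Dirac reports can be approximated by Schroeder backward integration of a measured impulse response: the energy decay curve is fitted over its first 20 dB of decay and extrapolated to a 60 dB decay. The sketch below is illustrative only (function names and the synthetic test signal are hypothetical, not the Dirac implementation).

```python
import numpy as np

def t20_from_impulse_response(h, fs):
    """Estimate reverberation time (T20) from a room impulse response.

    Schroeder backward integration yields the energy decay curve (EDC);
    a line is fitted to its -5 to -25 dB portion (a 20 dB range) and
    extrapolated to a 60 dB decay. Illustrative sketch only.
    """
    energy = np.asarray(h, dtype=float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]            # Schroeder integral
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(energy)) / fs
    fit_region = (edc_db <= -5.0) & (edc_db >= -25.0)
    slope, _ = np.polyfit(t[fit_region], edc_db[fit_region], 1)  # dB per second
    return -60.0 / slope                           # seconds to decay by 60 dB

# Synthetic exponential decay with a nominal reverberation time of 0.4 s
fs = 16000
t = np.arange(int(0.8 * fs)) / fs
h = np.exp(-6.91 * t / 0.4)   # amplitude envelope giving ~60 dB energy decay in 0.4 s
```

Applied to a real measurement, `h` would be the impulse response recovered from the pure-tone sweep rather than a synthetic envelope.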

Procedure Prior to each video recording session, screening tympanometry was performed for each child. If the tympanometry results differed from previous records, the experimenters checked with the caregiver about the child’s hearing behavior. Video recording proceeded if there were no reports of changes in the child’s responsiveness to sounds. Otherwise, the recording session was rescheduled. If a child with impaired hearing had not been wearing hearing aids over several days prior to a video recording session, recording was postponed until 2 weeks after hearing aid use was re-established. For all children who used hearing aids, a listening check of their devices was performed prior to video recording to ensure that the hearing aids were functioning properly. The parents/caregivers were given a diary to document their child’s activities in their natural environments over a 1-week period.

Data Analyses Video recording. The video recordings were transferred to a computer for analysis. The video streams recorded

Figure 2. Frequency-specific polar plots for an omnidirectional mode and a directional processing mode, measured in an anechoic chamber. The microphones were housed in a behind-the-ear hearing aid case on KEMAR, showing effects of head diffraction. The difference in directivity index between the two modes is 3.5 dB. AI-DI = articulation index-weighted directivity index.


on the two cameras were synchronized by using a recorded hand clap and were viewed by using a commercial video package (Adobe Premiere, Version 2.0). A purpose-designed application was developed to determine the proportion of time when (a) speech was produced by a primary talker and, during that time, (b) the child looked in the direction of the talker, designated 0° azimuth, within ±45°; (c) the child oriented sideways from the talker location, 90° or 270° azimuth, within ±45°; and (d) the child oriented rearward from the talker location, 180°, within ±45°. Ten minutes of the 15-min recording of each scenario for each participant was analyzed. To simplify the task of logging head orientation in real time and to increase reliability of the analyses, we decided to analyze the child’s look-direction in the horizontal plane only. In general, average hearing aid directivity is minimally affected by deviations of 20° (Ricketts & Dittberner, 2002), and significant elevation of perhaps 45° or more will diminish the benefit when the child is looking toward the source and will diminish the disadvantage when the child is looking away from the source. The net effect will, therefore, be very small. During the analyses, the recording was viewed first to log the time when speech was present and second to determine the head orientation of the child during that time period. For analysis purposes, speech was logged as “present” whenever a primary talker could be identified, regardless of whether speech was directed to the child or to another adult. The latter represented opportunities for overhearing. Instances in which either the child was vocalizing to himself/herself or multiple talkers were speaking but no one directed speech to the child were excluded from analyses. All video analyses were carried out by the second author.
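The three look-direction categories amount to binning azimuth into front, side, and back sectors. A hypothetical helper illustrating the coding scheme (not the authors' logging application):

```python
def look_direction(azimuth_deg):
    """Classify head orientation relative to the talker at 0° azimuth.

    Frontward: within ±45° of 0°; rearward: within ±45° of 180°;
    sideways: within ±45° of 90° or 270°. Mirrors the coding scheme
    used for the video analyses (horizontal plane only).
    """
    a = azimuth_deg % 360
    if a <= 45 or a >= 315:
        return "front"
    if 135 <= a <= 225:
        return "back"
    return "side"

# e.g., a child turned 30° away from the talker is still coded as facing front
print(look_direction(30))    # front
```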

Diary entry. The activities described by parents were grouped into two categories: indoors and outdoors. Within each group, activities were categorized into one-to-one situations, group situations, and solitary play. STI. The STIs for omnidirectional and directional modes of processing were calculated, and the measures based on the “best” and “worst” estimates were averaged. To convert STIs to dB, the STIs (ranging from 0 to 1.0) were multiplied by 30, based on the Speech Intelligibility Index model (American National Standards Institute [ANSI], 1997). The benefit or deficit due to directionality was computed by taking the difference between the omnidirectional and the directional modes. To estimate the effect of directionality when a child oriented sideways (either left or right) relative to the location of the talker, the values at 90° and 270° were averaged. Effects for head orientation to the front or the rear of the talker were based on values at 0° and 180°, respectively. The effects of directionality, hearing status (normal hearing or impaired hearing) of children, age of children, and scenario were assessed by repeated measures analyses of variance (ANOVAs). Post hoc analyses of significant findings were carried out by using Tukey’s honestly significant difference test. For all analyses, a significance level of 5% (p < .05) was adopted.
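The STI-to-dB conversion and the benefit computation described above reduce to a pair of one-line functions (a sketch; the function names are hypothetical):

```python
def sti_to_db(sti):
    """Map an STI value (0-1.0) to an equivalent dB scale by multiplying by 30,
    following the Speech Intelligibility Index model (ANSI, 1997)."""
    return 30.0 * sti

def directional_effect_db(sti_dir, sti_omni):
    """Directional effect in dB: positive values indicate a directional
    benefit, negative values a deficit, relative to the omni mode."""
    return sti_to_db(sti_dir) - sti_to_db(sti_omni)

# e.g., an STI advantage of 0.10 for the directional mode corresponds to 3 dB
print(round(directional_effect_db(0.70, 0.60), 1))
```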

Results Head Orientation A preliminary analysis was carried out to determine the proportion of time during a 10-min sample of recording when speech was deemed “present.” Table 2 shows the descriptive statistics for children with normal hearing

Table 2. Mean, standard deviation (SD), and range of proportion of time when speech was deemed “present” in the video recordings of children in four scenarios: Scenario 1 (child participant interacting with caregiver indoors), Scenario 2 (child participant playing while 2 adults interacted in the same room), Scenario 3 (child participant with other children and adults indoors), and Scenario 4 (child participant with other children and adults outdoors).

                   NH                                HI
Scenario     n     M      SD     Range        n     M      SD     Range
1           11     0.29   0.07   0.21–0.45   16     0.30   0.12   0.14–0.56
2           11     0.56   0.10   0.41–0.74   16     0.55   0.08   0.42–0.68
3           11     0.29   0.09   0.18–0.53   14     0.26   0.07   0.07–0.37
4            9     0.21   0.12   0.09–0.44   12     0.19   0.09   0.05–0.43

Note. n = number of children recorded in each scenario, both for children with normal hearing (NH) and those with hearing impairment (HI).


and those with hearing impairment. ANOVAs with proportions as dependent variable, hearing status as a grouping variable, scenario as repeated measures, and age as a covariate indicated that speech was present in Scenario 2 (a child with more than one adult indoors) for a greater proportion of time than in other scenarios, F(3, 54) = 5.8, p < .002. Neither hearing status nor age of children had a significant effect on the proportion of time when speech was present (p > .05). Given that speech was not necessarily directed to the child in Scenario 2, these data reinforce the need for amplification to support overhearing and incidental learning to occur in natural environments. Of main interest in this study is the proportion of time that children looked in the direction of the primary talker when speech was present. This is shown in Figure 3, separately for children with normal hearing and children with impaired hearing. On average, children oriented toward the talker for more than 50% of the time in one-to-one situations (Scenario 1) and between 30% and 50% of the time in situations where multiple talkers were present. ANOVAs were used to examine the effect of age and presence of hearing loss on the propensity of children’s head orientation toward a primary talker when speech was present. The proportion of time that children looked at the talker when speech was present was used as the dependent variable, scenario as repeated measures, age as a continuous variable, and hearing status (normal vs. impaired hearing) as a categorical variable. There were
no significant main effects or interactions (p > .05), suggesting that age, over the range in the present study (11–78 months of age), did not significantly affect the proportion of time that children looked in the direction of the primary talker. On average, the same was true for children with normal hearing and those with impaired hearing.

Directional Effects STI measurements for 17 children (9 children with normal hearing and 8 children with impaired hearing) were available for analysis. To reduce time commitments for families participating in the study, STI measurements were dropped for some families. Mean effects of directionality in dB (STIdir – STIomni ) for different scenarios are shown in Figure 4. The directional advantage for children when they looked at the primary talker ranged from 1.7 dB to 3.1 dB (M = 2.4 dB) across scenarios, with the maximum benefit being obtained in play situations with other children and adults present (Scenario 3). As expected, there were negative effects when children did not face a talker. On average, the deficits associated with a speech source located sideways to a child (M = –1.8 dB; range: –1.5 dB to –2.8 dB) were slightly higher than those associated with a speech source located behind a child (M = –1.4 dB; range: –1.3 dB to –1.7 dB). The data clearly supported a directional advantage when the

Figure 3. The mean proportion of time children looked at the talker when speech was present in real-life situations. Filled circles represent data from children with normal hearing, and open squares represent data from children with impaired hearing. Vertical bars denote confidence intervals of 0.95. NH = normal hearing; HI = hearing impaired.


Figure 4. The mean directional effect, expressed in terms of dB, when children oriented frontward, sideward, and rearward to the primary talker in four scenarios (SC1–SC4) in real life.

children looked in the direction of the speech source and a disadvantage when the speech source was either sideways or behind the children. This was expected when comparing the polar patterns of the omnidirectional and directional modes of processing measured at the behind-the-ear hearing aid mounted on the right ear of KEMAR (see Figure 2). As shown in the figure, the lowest sensitivity occurred for inputs at 240° azimuth (back and to the left) due to diffraction effects of the head. Averaged across participants, the physical effects due to directionality ranged between –2.6 dB and 3.1 dB across the four scenarios.

Children’s potential benefits and deficits in real life from the use of directionality in personal hearing devices may be estimated by computing an overall effect, based on the sum of the effect in dB weighted by the proportion of time children oriented frontward, sideward, and rearward, relative to the talker location. Essentially, the overall advantage was calculated by the following formula:

WAdir = dBfront × %front + dBside × %side + dBback × %back,   (1)

where WAdir is the weighted advantage for directionality, dBfront is the advantage in dB when children faced the talker, and %front is the proportion of time children faced the talker. In the formula, dBside and %side represent values averaged over 90° and 270°, that is, when children oriented sideways either to the left or the right of the talker; and dBback and %back represent values when the talker was behind the child. Figure 5 shows the overall advantage in real life, revealing that the mean effect was within ±0.5 dB in each of the four scenarios.
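Equation 1 can be written out directly; the per-direction effects below are the averages reported above, but the orientation proportions are illustrative placeholders, not the study's measured values:

```python
def weighted_directional_advantage(db_front, db_side, db_back,
                                   p_front, p_side, p_back):
    """Overall real-life directional effect (Equation 1): each per-direction
    effect in dB is weighted by the proportion of speech-present time the
    child spent in that orientation (proportions sum to 1)."""
    return db_front * p_front + db_side * p_side + db_back * p_back

# Average effects from above: +2.4 dB facing, -1.8 dB sideways, -1.4 dB behind,
# combined with hypothetical orientation proportions of 0.40 / 0.35 / 0.25
effect = weighted_directional_advantage(2.4, -1.8, -1.4, 0.40, 0.35, 0.25)
print(round(effect, 2))   # a net effect close to 0 dB
```

With roughly these weights the frontward benefit and the sideward/rearward deficits nearly cancel, which is consistent with the near-zero overall effects shown in Figure 5.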

Daily Activities in Real Life Diaries on 24 children (10 children with normal hearing and 14 children with impaired hearing) were available for analysis. The activities were grouped into “indoors” and “outdoors.” Within each group, the activities were categorized according to nature of activity into “one-to-one,” “group play,” and “solitary play.” Figure 6 shows the activities of children over a 1-week period, as reported by their parents or caregivers. In addition,

Figure 5. The overall effect of directionality in real life, expressed in terms of dB weighted by the proportion of time children oriented toward or away from the primary talker (see Equation 1 in main text). Vertical bars denote confidence intervals of 0.95.


Journal of Speech, Language, and Hearing Research • Vol. 52 • 1241–1254 • October 2009

Figure 6. Daily activities of children as reported by parents or caregivers. Filled circles represent data from children with normal hearing, and open squares represent data from children with impaired hearing. Significant differences (p < .05) between groups are marked by asterisks.

the diaries revealed that “traveling in a car” occurred for 1 normal-hearing and 4 hearing-impaired children 8–20 times per week, lasting between 20 and 90 min per day. To examine whether the daily activities of children varied with hearing status, ANOVAs were carried out with activity category (six groups) as repeated measures and hearing status as a between-group factor. A significant main effect of activities was found, F(5, 105) = 4.5, p < .001, with “group play indoors” being the most frequent activity of children (M = 0.46, range = 0.37–0.54). The interaction between activity and hearing status was significant, F(5, 105) = 2.3, p = .047. Compared with children with normal hearing, children with impaired hearing engaged in more one-to-one play (p < .001) but less group play indoors (p < .001) and in more solitary play (p < .001) but less group play outdoors (p < .001). It may be surmised that the higher proportion of one-to-one situations is related to the rehabilitation needs of children with impaired hearing, whereas the lower proportion of group play activities is likely to be a reflection of communicative difficulties that children with hearing impairment encounter in these activities in real life. About 70% of the reported activities took place indoors, many of which occurred in the rooms where video recordings were made. This was expected because the activities and locations for video recording were selected by parents to be representative of their child’s daily activities. The mean reverberation time of the rooms averaged

over 0.25 kHz–4 kHz was 0.41 s (range: 0.26 s to 0.78 s; SD = 0.16 s). The diary entries revealed that the talker– listener distance in one-to-one and group play situations was within 2 m and mostly within 1 m. For these ranges of reverberation times and talker–listener distances, children are likely to obtain directional benefit due to the reduction of noise and reverberation relative to the direct sound from the talker (Hawkins & Yacullo, 1984; Ricketts & Hornsby, 2003).
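This reasoning can be made concrete with the standard critical-distance approximation from room acoustics, r_c ≈ 0.057·√(QV/RT60). The room volume and directivity factor below are assumed values for illustration; they are not measurements from the study.

```python
import math

def critical_distance(volume_m3, rt60_s, directivity_factor=1.0):
    """Distance at which direct and reverberant energy are roughly equal
    (diffuse-field approximation): r_c = 0.057 * sqrt(Q * V / RT60)."""
    return 0.057 * math.sqrt(directivity_factor * volume_m3 / rt60_s)

# Assumed 40 m^3 living room with the mean RT60 of 0.41 s reported above.
r_omni = critical_distance(40, 0.41)
# A directional microphone with directivity factor Q ~ 2 extends the
# effective critical distance by sqrt(Q).
r_dir = critical_distance(40, 0.41, directivity_factor=2.0)
print(round(r_omni, 2), round(r_dir, 2))
```

Shorter talker–listener distances keep the listener closer to, or within, the effective critical distance, where the direct sound dominates the reverberant field; directionality extends that distance further.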

Discussion

The main purpose of this study was to determine the effects of directional microphone technology on young children in real life. The approach involved, first, examination of how often young children looked in the direction of the talker of interest in everyday life; second, quantification of directional effects in real-life situations for different head orientations; and third, determination of the proportion of children’s activities in real life in which they are likely to benefit from directionality. The findings reveal that children between 11 months and 78 months of age did look at the talker of interest in everyday life for more than 50% of the time when speech was present during direct interactions with a caregiver and for more than 40% of the time when speech was present in scenarios with other children and adults, in which speech was not always directed to the child participants.

Ching et al.: Directional Effects for Young Children


These two types of situations constitute about 20% and 50%, respectively, of children’s daily activities, as reported by parents or caregivers. Quantification of directional effects in terms of STI suggests that, on average, up to a 3-dB advantage was obtained when children looked at the talker but a 2.8-dB deficit was obtained when children looked away from the talker. These effects apply to the set of environments randomly chosen by parents of individual children to be representative of the daily activities of their children. The STI measurements relate to omnidirectional and directional microphone processing of recordings made on KEMAR. The difference in directivity index between the two microphone configurations was 3.5 dB (see Figure 2). The effect of using a system that has a higher directivity index than the one used in the present study is to slightly increase both the frontal benefit and the sideways disadvantage beyond what was measured. Weighting these small increases by the proportion of time children look in the right direction in real-life situations is unlikely to lead to an overall effect of directionality that is different from the current estimates. A plausible inaccuracy in the experiment is that the STI measurements were made on the head of KEMAR, which is slightly larger than a child’s head. We do not think that this error will materially affect the data. The average head diameter of a 12-month-old is only 20% smaller than that of KEMAR. The effect of decreasing head diameter is to move the polar pattern a little toward the pattern that applies when the hearing aid is measured in isolation in the free field—that is, the polar responses on a small child’s head will be slightly more directed toward the front than they are on KEMAR, where the sensitivity maximum is directed partly to the side. Consequently, the child should have slightly more frontal benefit and slightly less sideways disadvantage than we measured with KEMAR.
As these are opposing small changes, we do not think that the conclusions would have been different had a child-sized manikin been used. When the measured directional effects were weighted by the proportion of time children oriented toward or away from the talker, the overall advantage/deficit in real life was effectively zero. This overall effect is likely to have underestimated the overall advantage of directionality for children with impaired hearing. If directionality were activated in hearing aids fitted to children early in life, caregivers could be counseled to help children maximize directional benefits by encouraging them to orient toward a talker in noisy situations more often. It is possible, even likely, that when children wear directional microphones, they will increase the proportion of time that they look at the talker. Fitting directionality early also helps to reduce children’s exposure to noise over a lifetime of hearing aid use, on the assumption that


the omnidirectional and directional microphones have the same sensitivity for frontal sound sources. Furthermore, the need for providing directional amplification binaurally is even greater for children who demonstrate a deficit in binaural processing, which has been found in many children with hearing loss from birth. Unlike children with normal binaural processing abilities, these children are not able to take advantage of the spatial separation between speech and noise to improve speech intelligibility (Ching, van Wanrooy, Hill, & Incerti, 2006; Litovsky, Johnstone, & Godar, 2006). In advocating the activation of directional settings in hearing devices for young children, two clinical issues need to be considered. The first relates to compensation for low-frequency gain to achieve prescriptive targets, and the second relates to the frequency range of directivity. When directional processing in hearing aids is activated, signals sampled at two microphone locations are subtracted after applying some form of internal delay to the signal from the rearward location (for a detailed description, see Ricketts & Mueller, 1999). In addition to reducing outputs of sounds that originate from nonfrontal sources, directional processing also affects the frequency response of sounds arriving from directly in front of the listener. Because low-frequency signals sampled at two points near each other in space will be more similar in phase than high-frequency signals, directional microphones are less sensitive in the low frequencies than their omnidirectional counterparts (Ricketts, 2001; Thompson, 2003). This reduction of low-frequency gain associated with directional processing is commonly referred to as directional roll-off. 
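The delay-and-subtract scheme and its low-frequency roll-off can be illustrated with a short numeric sketch. The port spacing and internal delay below are assumed, generic values, not parameters of any particular hearing aid.

```python
import cmath, math

def directional_response(freq_hz, port_spacing_m=0.012, c=343.0):
    """Magnitude response of an idealized first-order delay-and-subtract
    directional microphone for a frontal (0 degree) source. Setting the
    internal delay equal to the external port delay yields a
    cardioid-like pattern."""
    external_delay = port_spacing_m / c   # travel time between the two ports
    internal_delay = external_delay       # applied to the rear-port signal
    total = external_delay + internal_delay
    # Front mic minus delayed rear mic: H(f) = 1 - exp(-j*2*pi*f*total)
    return abs(1 - cmath.exp(-2j * math.pi * freq_hz * total))

# Sensitivity falls toward low frequencies ("directional roll-off").
low = directional_response(250.0)
high = directional_response(2000.0)
print(20 * math.log10(high / low))  # about 17.8 dB over 3 octaves
```

Because |1 − e^(−j2πfτ)| shrinks roughly in proportion to frequency when fτ is small, the output falls at about 6 dB per octave toward the low end, which is the roll-off described above.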
Gain compensation in the directional mode that results in a frequency response equal to that of the omnidirectional response (equalized) has been recommended to offset the potential reduction of low-frequency audibility in the directional mode (Christensen, 2000; Ricketts, 2001). This appears to be necessary to avoid a decrement in speech recognition for adult listeners whose low-frequency hearing thresholds exceeded 40 dB HL (Ricketts & Henry, 2002). Although there are no experimental data on whether the same applies to children, there is no reason why their need for low-frequency gain compensation to achieve adequate audibility should be different. Therefore, when activation of directionality prevents low-frequency targets from being achieved, it may be necessary to consider providing directionality in some restricted frequencies. Some commercial hearing aids have implemented directional schemes that limit directivity to certain frequency bands. For example, one scheme limited directivity to low frequencies with the goal of preserving high-frequency audibility. This scheme was evaluated by Ricketts et al. (2007) for child listeners in simulated classroom environments. It was found that low-frequency


directivity neither reduced the decrement in speech intelligibility measured with full-band directivity when the talker was behind a listener nor brought about the directional benefit obtained with full-band directivity when the speech source was located to the front of the listener. Several hearing aids that are now available combine directivity in the high frequencies with omnidirectional processing in the low frequencies. Although this configuration is motivated by the desire to simulate the directivity pattern of normal unaided hearing, a side benefit is that it does not reduce low-frequency audibility. The STI measurements reported in this study reflect effects of full-band directionality on speech arriving from different angles across a wide range of frequencies, and no quantification of the effects of band-limited directivity on children in real life is currently available. The effect on SNR is, however, clear: Reducing the frequency extent of directivity reduces both the SNR enhancement possible when the talker is frontal and the SNR decrement possible when the talker is at the sides or to the rear. Because directional microphones work by reducing output of signals originating from nonfrontal sources, children benefit from directional microphones whenever they look at the talker close to them in noisy environments. When the talkers are either on their sides or behind them, directional microphones degrade speech intelligibility, as shown in this study on young children (see Figure 4) and also in a previous study on older children (Ricketts et al., 2007). In very noisy environments, directional microphones will help only if the child is aware of the need to look in the right direction. There are also other situations when directionality is not desirable—specifically, when the hearing aid user is in a very quiet environment or when there is wind noise. Using directionality in quiet environments would improve the ratio of direct-to-reverberant sound. 
However, a hearing aid with a directional response has a higher level of internal noise than if it had an omnidirectional response, because additional amplifier gain is applied in the directional response to the low frequencies to compensate for the reduction in sensitivity at these frequencies. This amplifies not only the input signal but also the microphone’s internal noise. Even when the internal noise of the microphone is low enough to be below the user’s hearing threshold in the omnidirectional mode, the amplified internal noise could become audible and objectionable to the user in a directional mode in very quiet environments. Furthermore, wind noise caused by turbulent airflow around the head and hearing aid originates close to the hearing aid microphones and is therefore picked up with greater sensitivity than the more distant target signal when a directional pattern is used, thereby reducing speech intelligibility in outdoor environments.
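The internal-noise argument follows from simple arithmetic: whatever low-frequency boost is used to equalize the directional roll-off is applied to the microphone noise as well. The 6 dB/octave slope, 1-kHz corner, and noise-floor level below are assumed, illustrative values.

```python
import math

def equalization_gain_db(freq_hz, corner_hz=1000.0):
    """Low-frequency boost needed to flatten a 6 dB/octave directional
    roll-off below an assumed 1 kHz corner (illustrative values only)."""
    if freq_hz >= corner_hz:
        return 0.0
    octaves_below = math.log2(corner_hz / freq_hz)
    return 6.0 * octaves_below

# The same boost is applied to the microphone's internal noise, so noise
# that was inaudible in omnidirectional mode can become audible in quiet.
noise_floor_db_spl = 20.0   # assumed internal noise at 250 Hz, omni mode
boosted = noise_floor_db_spl + equalization_gain_db(250.0)
print(equalization_gain_db(250.0), boosted)  # 12.0 dB of boost -> 32.0 dB SPL
```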

Previous studies on adults have indicated a preference for omnidirectional mode in quiet environments (Kuk, 1996) and switchable as opposed to full-time directionality in real life (Mueller, Grimes, & Erdman, 1983; Preves, Sammeth, & Wynne, 1999); studies have also observed greater hearing aid benefits in noisy listening situations with switchable modes than with full-time directionality (Ricketts, Henry, & Gnewikow, 2003). Although there are no experimental data on whether children have the same preferences, there does not seem to be any strong reason why their preferences should be different. The availability of both directional and nondirectional modes in hearing aids is crucial for supporting the development of young children. It has been reported that children continually monitor their environments and shift their attention to a third-party interaction when they hear something novel (Baldwin, 1993). This ability has been observed in young children in different cultural contexts and appears to play an important role in learning language (Akhtar, 2005; Rogoff, Mistry, Göncü, & Mosier, 1993) and social–cognitive skills (Forrester, 1993). In this study, the data on head orientation of children in Scenarios 2, 3, and 4 demonstrate that in situations where the children were not directly interacting with a talker, they oriented to the talker of interest about 40% of the time when speech was present. The diary entries reveal that about two-thirds of the daily activities of children (all except solitary play) occur in such situations, where they will benefit from omnidirectionality for “monitoring” their environments but from directionality when they look in the direction of a talker of interest. Most commercial hearing aids have a way to switch from directional to omnidirectional patterns.
Because young children are not able to operate manual switches—and because it is not practical for a vigilant adult to always observe the changing listening environments and needs of a child to make intelligent decisions about program switching—an automatic switching arrangement in devices for young children is recommended. Although there are currently no data to support the effectiveness of an automatic switching algorithm in this population or in any other population, such an arrangement is likely to (a) enhance SNR in noisy situations when the listener looks in the direction of the talker of interest and (b) perform similarly to an omnidirectional microphone in quiet environments (Bentler & Dittberner, 2003; Kuk, Keenan, Lau, & Ludvigsen, 2005). The net effect of automatic switching will be beneficial if it chooses directional mode more often when speech is coming from the front than when it is coming from the back and omnidirectional mode more often when speech is coming from the back than from the front. An automatic switching arrangement will be most effective if it takes into account not


just noisiness of the environment but also the direction of arrival of speech. A concern commonly expressed by clinicians is that directional microphones will prevent a child from hearing important, possibly life-saving, sounds from the rear. For two reasons, we think that this concern is unnecessary. First, directional microphones are just not that directional. If an attenuation of a few decibels for sounds from the rear makes such a critical difference, then equally the child would be disadvantaged every time a rearward source decreased its level by a few dB. Given the range of sound-pressure levels across sources, and the dynamics within individual sources, it seems very unlikely that a decrease in level of a few dB will have significant effects. (This, of course, is not the case when speech and noise are simultaneously present, when an increase in SNR of a few dB can have very worthwhile effects on intelligibility.) Second, when the dominant sound comes from the rear and is attenuated by a directional microphone, the hearing aid compressor will increase its gain, typically by around half of the level decrease. Consequently, the net decrease in output level is only about half of that caused by the directional microphone acting alone. This effect will occur for all sounds sufficiently intense to exceed hearing aid compression threshold. For very intense sounds that drive the hearing aid into limiting output, the effect is even more marked.
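The compressor argument in the preceding paragraph is simple arithmetic: under wide dynamic range compression (WDRC) above the compression threshold, the output level changes by the input change divided by the compression ratio. The 2:1 ratio below is an assumed, illustrative value.

```python
def output_level_change(input_change_db, compression_ratio=2.0):
    """Change in hearing aid output for a given change in input level
    under simple WDRC above the compression threshold: the output moves
    by input_change / ratio."""
    return input_change_db / compression_ratio

# A directional microphone attenuates a rearward sound by, say, 4 dB.
# With an (assumed) 2:1 compression ratio, the compressor restores about
# half of that, so the output drops by only 2 dB.
net_drop = output_level_change(-4.0)
print(net_drop)  # -2.0
```

At a 2:1 ratio, the compressor restores half of the directional attenuation, matching the “around half” figure cited above.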

Limitations of the Study

This study quantified the effects of directionality on speech intelligibility in real life by analyzing the auditory behavior and listening environments of children in naturalistic settings. The directional effects are likely to be an underestimation because none of the child participants with impaired hearing had any experience of directionality in their hearing devices. Future investigations will be necessary to examine the effect of full-band and band-limited directionality when implemented in personal hearing devices for young children in real life. Moreover, the effect of directionality on aspects other than speech intelligibility, such as localization, needs to be examined. Further research will also be necessary to devise intelligent schemes with directional processing that support the listening needs of young children.

Conclusion

This study found that young children oriented to a primary talker of interest for more than 40% of the time in naturalistic situations in real life. Neither age nor hearing status had a significant effect on the proportion of time in which children looked at the primary talker. In real-life situations and averaged across children, directional microphones provided a directional advantage


of up to 3 dB when a child faced a primary talker but a deficit of up to 2.8 dB when the primary talker was sideways or behind the child. The overall advantage in real life, on average, was between –0.4 dB and 0.2 dB. This suggests that directional microphones are not detrimental and have the potential to provide significant benefits for young children in real life. The potential benefits may be enhanced by fitting directionality early and by counseling caregivers on ways to maximize benefits in real-life situations. This applies to current hearing aids. Further benefits could be realized if hearing aids can be made to switch automatically between directional and omnidirectional modes, depending not only on the noisiness of environments but also on the direction of arrival of speech from the dominant talker.

Acknowledgments

Parts of this article were presented at the 29th International Congress of Audiology in Hong Kong, June 2008, and the Newborn Hearing Screening 2008 Conference in Cernobbio (Como Lake), Italy, June 2008. We thank Catherine Morgan for her assistance in video analyses. We are very grateful to all the children and their families for their participation in this study.

References

Akhtar, N. (2005). The robustness of learning through overhearing. Developmental Science, 8, 199–209.
American National Standards Institute. (1997). The calculation of the speech intelligibility index (ANSI S3.5-1997). New York: Author.
Ashmead, D. H., Clifton, R. K., & Perrin, E. E. (1987). Precision of auditory localization in human infants. Developmental Psychology, 23, 641–647.
Baldwin, D. A. (1993). Infants’ ability to consult the speaker for clues to word reference. Journal of Child Language, 20, 395–418.
Bentler, R. A. (2005). Effectiveness of directional microphones and noise reduction schemes in hearing aids: A systematic review of the evidence. Journal of the American Academy of Audiology, 16, 473–484.
Bentler, R. A., & Dittberner, A. (2003). Better listening ahead as directional technology advances. Hearing Journal, 56, 52–53.
Bohnert, A., & Brantzen, P. (2004). Experiences when fitting children with a digital directional hearing aid. Hearing Review, 11, 50–55.
Byrne, D. (1983). Word familiarity in speech perception testing of children. Australian Journal of Audiology, 5, 77–80.
Byrne, D., Dillon, H., Ching, T., Katsch, R., & Keidser, G. (2001). NAL-NL1 procedure for fitting non-linear hearing aids: Characteristics and comparisons with other procedures. Journal of the American Academy of Audiology, 12, 37–51.
Cameron, S., Dillon, H., & Newall, P. (2006). The Listening in Spatialized Noise test: Normative data for children. International Journal of Audiology, 45, 99–108.


Ching, T. Y. C., van Wanrooy, E., Hill, M., & Incerti, P. (2006). Performance in children with hearing aids or cochlear implants: Bilateral stimulation and binaural hearing. International Journal of Audiology, 45(Suppl. 1), S108–S112.
Christensen, L. A. (2000). Signal-to-noise ratio loss and directional-microphone hearing aids. Seminars in Hearing, 21, 179–200.
Condie, R. K., Scollie, S. D., & Checkley, P. (2002). Children’s performance: Analog versus digital adaptive dual-microphone instruments. Hearing Review, 9, 40–43, 56.
Cord, M. T., Surr, R. K., Walden, B. E., & Olson, L. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology, 13, 295–307.
Crandell, C. (1993). Speech recognition in noise by children with minimal degrees of sensorineural hearing loss. Ear and Hearing, 14, 210–216.
Crandell, C., & Smaldino, J. (2000). Classroom acoustics and amplification. In M. Valente, R. Roeser, & H. Hosford-Dunn (Eds.), Audiology: Vol. II. Treatment (pp. 382–410). New York: Thieme Medical Publishers.
Crandell, C., & Smaldino, J. J. (2004). Classroom acoustics. In R. D. Kent (Ed.), The MIT encyclopedia of communication disorders (pp. 442–444). Cambridge, MA: The MIT Press.
Elliott, L. L., Conners, S., Kille, E., Levin, S., Ball, K., & Katz, D. (1979). Children’s understanding of monosyllabic nouns in quiet and in noise. The Journal of the Acoustical Society of America, 66, 12–21.
Finitzo-Hieber, T., & Tillman, T. (1978). Room acoustics effects on monosyllabic word discrimination ability for normal and hearing-impaired children. Journal of Speech and Hearing Research, 21, 440–458.
Forrester, M. A. (1993). Affording social-cognitive skills in young children: The overhearing context. In D. J. Messer & G. J. Turner (Eds.), Critical influences on child language acquisition and development (pp. 40–61). New York: St. Martin’s Press.
Gravel, J., Fausel, N., Liskow, C., & Chobot, J. (1999). Children’s speech recognition in noise using omni-directional and dual-microphone hearing aid technology. Ear and Hearing, 20, 1–11.
Hawkins, D. B. (1984). Comparisons of speech recognition in noise by mildly-to-moderately hearing-impaired children using hearing aids and FM systems. Journal of Speech and Hearing Disorders, 49, 409–418.
Hawkins, D., & Yacullo, W. S. (1984). Signal-to-noise advantage of binaural hearing aids and directional microphones under different levels of reverberation. Journal of Speech and Hearing Disorders, 49, 278–286.
Houtgast, T., Steeneken, H. J. M., & Plomp, R. (1980). Predicting speech intelligibility in rooms from the modulation transfer function: I. General room acoustics. Acoustica, 46, 60–72.
Jamieson, D. G., Kranjc, G., Yu, K., & Hodgetts, W. E. (2004). Speech intelligibility of young school-aged children in the presence of real-life classroom noise. Journal of the American Academy of Audiology, 15, 508–517.
Johnstone, P. M., & Litovsky, R. Y. (2006). Effect of masker type and age on speech intelligibility and spatial release from masking in children and adults. The Journal of the Acoustical Society of America, 120, 2177–2189.
Kuk, F. K. (1996). Subjective preference for microphone types in daily listening environments. Hearing Instruments, 49, 29–30, 32–35.
Kuk, F. K., Kollofski, C., Brown, S., Melum, A., & Rosenthal, A. (1999). Use of a digital hearing aid with directional microphones in school-aged children. Journal of the American Academy of Audiology, 10, 535–548.
Kuk, F., Keenan, D., Lau, C. C., & Ludvigsen, C. (2005). Performance of a fully adaptive directional microphone to signals presented from various azimuths. Journal of the American Academy of Audiology, 16, 333–347.
Lewis, D. E. (1991). FM systems and assistive devices: Selection and evaluation. In J. Feigin & P. G. Stelmachowicz (Eds.), Pediatric amplification (pp. 139–152). Omaha, NE: Boys Town National Research Hospital.
Litovsky, R. Y. (2005). Speech intelligibility and spatial release from masking in young children. The Journal of the Acoustical Society of America, 117, 3091–3099.
Litovsky, R. Y., Johnstone, P. M., & Godar, S. P. (2006). Benefits of bilateral cochlear implants and/or hearing aids in children. International Journal of Audiology, 45(Suppl. 1), S78–S91.
Madell, J. R. (1992). FM systems as primary amplification for children with profound hearing loss. Ear and Hearing, 13, 102–107.
Moeller, P. P., Donaghy, K. F., Beauchaine, K. L., Lewis, D. E., & Stelmachowicz, P. G. (1996). Longitudinal study of FM system use in non-academic settings: Effects on language development. Ear and Hearing, 17, 28–41.
Mueller, H. G., Grimes, A., & Erdman, S. (1983). Subjective ratings of directional amplification. Hearing Instruments, 34, 14–16.
Muir, D., & Field, J. (1979). Newborn infants orient to sounds. Child Development, 50, 431–436.
Muir, D., Clifton, R. K., & Clarkson, M. G. (1989). The development of a human auditory localization response: A U-shaped function. Canadian Journal of Psychology, 3, 199–216.
Nabelek, A., & Robinson, P. K. (1982). Monaural and binaural speech perception in reverberation for listeners of various ages. The Journal of the Acoustical Society of America, 71, 1242–1248.
Nozza, R. J., Rossman, R. N., Bond, L. C., & Miller, S. L. (1990). Infant speech-sound discrimination in noise. The Journal of the Acoustical Society of America, 87, 339–350.
Nozza, R. J., Rossman, R. B. F., & Bond, L. C. (1991). Infant-adult differences in unmasked thresholds for the discrimination of consonant-vowel syllable pairs. Audiology, 30, 102–112.
Palmer, C., Bentler, R., & Mueller, H. G. (2006). Evaluation of a second-order directional microphone hearing aid: II. Self-report outcomes. Journal of the American Academy of Audiology, 27, 190–201.
Ponton, C. W., Eggermont, J. J., Kwong, B., & Don, M. (2000). Maturation of human central auditory system activity: Evidence from multi-channel evoked potentials. Clinical Neurophysiology, 111, 220–236.
Preves, D. A., Sammeth, C. A., & Wynne, M. K. (1999). Field trial evaluations of a switched directional/omnidirectional in-the-ear hearing instrument. Journal of the American Academy of Audiology, 10, 273–284.


Pumford, J. M., Seewald, R. C., Scollie, S., & Jenstad, L. (2000). Speech recognition in a diffuse noise using in-the-ear and behind-the-ear dual-microphone hearing instruments. Journal of the American Academy of Audiology, 11, 23–35.
Ricketts, T. (2000). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing, 21, 194–205.
Ricketts, T. A. (2001). Directional hearing aids. Trends in Amplification, 5, 139–176.
Ricketts, T. A., & Dittberner, A. B. (2002). Directional amplification for improved signal-to-noise ratio: Strategies, measurement, and limitations. In M. Valente (Ed.), Hearing aids: Standards, options, and limitations (2nd ed., pp. 274–346). New York: Thieme Medical Publishers.
Ricketts, T. A., & Galster, J. (2008). Head angle and elevation in classroom environments: Implications for amplification. Journal of Speech, Language, and Hearing Research, 51, 516–525.
Ricketts, T. A., Galster, J., & Tharpe, A. M. (2007). Directional benefit in simulated classroom environments. American Journal of Audiology, 16, 130–144.
Ricketts, T. A., & Henry, P. (2002). Low-frequency gain compensation in directional hearing aids. American Journal of Audiology, 11, 29–41.
Ricketts, T. A., Henry, P., & Gnewikow, D. (2003). Full time directional versus user selectable microphone modes in hearing aids. Ear and Hearing, 24, 424–439.
Ricketts, T. A., & Hornsby, B. W. Y. (2003). Distance and reverberation effects on directional benefit. Ear and Hearing, 24, 472–484.
Ricketts, T. A., & Hornsby, B. W. Y. (2007). Estimation of directional benefit in real rooms: A clinically viable method. In R. C. Seewald (Ed.), Hearing care for adults: Proceedings of the First International Conference (pp. 195–206). Chicago: Phonak.
Ricketts, T. A., & Mueller, H. G. (1999). Making sense of directional microphone hearing aids. American Journal of Audiology, 8, 117–127.
Rogoff, B., Mistry, J., Göncü, A., & Mosier, C. (1993). Guided participation in cultural activity by toddlers and caregivers. Monographs of the Society for Research in Child Development, 58(8), v–vi, 1–174, 175–179.
Smith, L. B., Quittner, A. L., Osberger, M. J., & Miyamoto, R. (1998). Audition and visual attention: The developmental trajectory in deaf and hearing populations. Developmental Psychology, 34, 840–850.
Steeneken, H. J. M., & Houtgast, T. (1980). A physical method for measuring speech transmission quality. The Journal of the Acoustical Society of America, 67, 318–326.
Surr, R., Walden, B., Cord, M., & Olson, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of the American Academy of Audiology, 13, 308–322.
Thompson, S. C. (2003). Tutorial on microphone technologies for directional hearing aids. The Hearing Journal, 56(11), 14–21.
Valente, M., Fabry, D. A., & Potts, L. G. (1995). Recognition of speech in noise with hearing aids using dual microphones. Journal of the American Academy of Audiology, 6, 440–449.
Vidas, S., Hassan, R., & Parnes, L. S. (1992). Real-life performance considerations of four pediatric multi-channel cochlear implant recipients. Journal of Otolaryngology, 21, 387–393.
Walden, B. E., Surr, R. K., Cord, M. T., Edwards, B., & Olson, L. (2000). Comparison of benefits provided by different hearing aid technology. Journal of the American Academy of Audiology, 11, 540–560.
Walden, B. E., Surr, R. K., Cord, M. T., & Dyrlund, O. (2004). Predicting hearing aid microphone preference in everyday listening. Journal of the American Academy of Audiology, 15, 365–396.
Werner, L. A. (1996). The development of auditory behavior (or what the anatomists and physiologists have to explain). Ear and Hearing, 17, 438–446.

Received December 18, 2008
Accepted May 10, 2009
DOI: 10.1044/1092-4388(2009/08-0261)

Contact author: Teresa Y.-C. Ching, National Acoustic Laboratories, 126 Greville Street, Chatswood, New South Wales 2067, Australia. E-mail: [email protected].
