Journal of Health Communication, 17:141–159, 2012 ISSN: 1081-0730 print/1087-0415 online DOI: 10.1080/10810730.2012.712615

Measurement Variation Across Health Literacy Assessments: Implications for Assessment Selection in Research and Practice

JOLIE HAUN AND STEPHEN LUTHER
Veterans Administration HSR&D and RR&D Center of Excellence, Tampa, Florida, USA

VIRGINIA DODD
Department of Health Education and Behavior, University of Florida, Gainesville, Florida, USA

PATRICIA DONALDSON
Department of Community Dentistry and Behavioral Science, University of Florida, Gainesville, Florida, USA

National priorities and recent federal initiatives have brought health literacy to the forefront in providing safe, accessible care. Having valid and reliable health literacy measures is a critical factor in meeting patients' health literacy needs. In this study, the authors examined variation across three brief health literacy instruments in categorizing health literacy levels and identifying associated factors. The authors screened 378 veterans using the short form of the Test of Functional Health Literacy in Adults, the Rapid Estimate of Adult Literacy in Medicine, and a 4-Item Brief Health Literacy Screening Tool (known as the BRIEF). They analyzed data using prevalence estimates, Pearson product moment correlations, and logistic regression. When categorizing individuals' health literacy, the three instruments agreed for 37% of the sample. There were consistencies; however, categorization and estimated risk factors varied by instrument. Depending on the instrument, increased age, low education, minority status, and self-reported poor reading level were associated with low health literacy. Findings suggest that these instruments measure health literacy differently and are likely conceptually different. As the use of health literacy screening gains momentum, alignment between instrument and intended purpose is essential; in some cases, multiple instruments may be appropriate. When selecting an instrument, one should consider style of administration, purpose of measurement, and availability of time and resources.

This article is not subject to US copyright law. The authors thank Mary Elizabeth Bowen, PhD, and Scott Barnett, PhD, for their careful review and constructive feedback regarding this manuscript during its development. This research was supported by the Tampa James A. Haley VA Medical Center, HSR&D & RR&D Research Center of Excellence, and does not represent the views of the Department of Veterans Affairs. Address correspondence to Jolie Haun, Health Science Specialist, Veterans Administration HSR&D & RR&D Center of Excellence, 8900 Grand Oak Circle (118M), Tampa, FL 33637, USA. E-mail: [email protected]


Health literacy is described as the capacity to obtain, process, and understand the basic health information and services needed to make appropriate health decisions (Nielsen-Bohlman, Panzer, & Kindig, 2004). Health literacy skills include reading comprehension, listening, analyzing, and decision making. In addition, health literacy is a critical skill set that affects patients' ability to communicate with health care providers, adhere to recommended treatments, and access and navigate services to manage health conditions. The Institute of Medicine (Nielsen-Bohlman et al., 2004) and Healthy People 2010 (U.S. Department of Health and Human Services, 2000) have identified health literacy as a national priority and a key factor in reducing disease and health disparities. Recent federal mandates and initiatives, including the Affordable Care Act of 2010, the Department of Health and Human Services' National Action Plan to Improve Health Literacy, and the Plain Writing Act of 2010, will bring health literacy to the forefront in providing safe, accessible care to patients with health literacy needs (Koh et al., 2012). As these mandates are implemented, accurately assessing and measuring individual health literacy levels will remain a critical issue for responding appropriately to patients' health information needs.

While individuals with adequate health literacy typically have the skills to access, exchange, and use health information, individuals with low health literacy skills have difficulty managing such tasks and are at risk for poor outcomes (Nielsen-Bohlman et al., 2004). Any patient may need assistance accessing, exchanging, and using health information. However, on the basis of previous research, individuals at the highest risk for low health literacy include those who are older (Baker et al., 2002; Gazmararian et al., 1999; Howard, Gazmararian, & Parker, 2005; Sudore, Mehta, et al., 2006; Sudore, Yaffe, et al., 2006), of minority status (Bennett et al., 1998; Cooper-Patrick et al., 1999; Cooper & Roter, 2003; Nielsen-Bohlman et al., 2004), of low socioeconomic status (Nielsen-Bohlman et al., 2004; Sudore, Mehta, et al., 2006), and of lower educational attainment (Arozullah et al., 2006; Nielsen-Bohlman et al., 2004; Sudore, Mehta, et al., 2006).

Identifying instruments that efficiently and accurately measure health literacy skills and appropriately identify associated sociodemographic risk factors is critical for supporting national priorities to improve health care and outcomes. Identification of instruments with these qualities is also important for advancing this field of science. Developing and validating brief health literacy instruments is an ongoing focus of inquiry (Baker, 2006; Baker, Williams, Parker, Gazmararian, & Nurss, 1999; Cheney & Nelson, 1988; Chew, Bradley, & Boyko, 2004; Chew et al., 2008; Davis, Crouch, et al., 1991; Davis, Long, et al., 1993; Nath, Sylvester, Yasek, & Gunel, 2001; Osborn et al., 2007; Parker, Baker, Williams, & Nurss, 1995; Powers, Trinh, & Bosworth, 2010; Rawson et al., 2010; Shea et al., 2004; Wallace, Rogers, Roskos, Holiday, & Weiss, 2006). Most instruments are designed to measure general health literacy (Chew et al., 2004; Chew et al., 2008; Davis, Crouch, et al., 1991; Davis, Long, et al., 1993; Nurss, Parker, Williams, & Baker, 2001; Osborn et al., 2007; Parker et al., 1995; Powers et al., 2010; Rawson et al., 2010; Wallace et al., 2006).
However, other research has emphasized the development of field-specific instruments for use in areas such as dentistry (Lee, Rozier, Lee, Bender, & Ruiz, 2007; Richman et al., 2007), diabetes (Nath et al., 2001), and genetics (Erby, Roter, Larson, & Cho, 2008). The specificity of these instruments allows assessment of condition-specific knowledge and skills. Although condition-specific instruments are advantageous, they do not negate the need for a valid and reliable measure of general health literacy.



Table 1. Comparison of characteristics of the S-TOFHLA, REALM, and BRIEF

S-TOFHLA*
Description: Reading comprehension
Time to administer: 7 minutes
Score range†: 0–36
Scoring: 0–16 = inadequate; 17–22 = marginal; 23–36 = adequate
Advantages: Available in Spanish
Limitations: Has to be administered and timed; requires materials
Correlations‡ (r value): REALM 0.61; BRIEF 0.42

REALM*
Description: Medical word recognition and pronunciation test
Time to administer: 2–7 minutes
Score range†: 0–66
Scoring: 0–44 = limited (0–6th-grade level); 45–60 = marginal (7th–8th-grade level); 61–66 = adequate (9th-grade level and above)
Advantages: Can be quick; uses medical terminology
Limitations: Has to be administered; requires materials
Correlations‡ (r value): S-TOFHLA 0.61; BRIEF 0.40

BRIEF*
Description: Self-report of health literacy skills
Time to administer: 1–2 minutes
Score range†: 4–20
Scoring: 4–12 = inadequate literacy; 13–16 = marginal literacy; 17–20 = adequate literacy
Advantages: Quick and easy to administer
Limitations: Self-report, not performance based
Correlations‡ (r value): S-TOFHLA 0.42; REALM 0.40

*S-TOFHLA = Short Test of Functional Health Literacy in Adults; REALM = Rapid Estimate of Adult Literacy in Medicine; BRIEF = 4-Item Brief Health Literacy Screening Tool.
†Higher score signifies higher health literacy level.
‡Correlations reported in Haun et al. (2009).

In the past, the short form of the Test of Functional Health Literacy in Adults (S-TOFHLA; Baker et al., 1999; Nurss et al., 2001; Parker et al., 1995) and the Rapid Estimate of Adult Literacy in Medicine (REALM; Davis, Long, et al., 1993; Davis et al., 2006) have been popular health literacy instruments. Despite limitations noted in the literature for the S-TOFHLA and REALM, widespread use of these instruments resulted in their recognition as reference standards for measuring health literacy (Baker, 2006; see Table 1).

In addition to the S-TOFHLA and REALM, other researchers have identified a set of self-report items for assessing health literacy (Chew et al., 2004; Chew et al., 2008; Powers et al., 2010; Wallace et al., 2006). In 2004, Chew and colleagues tested the ability of 16 health literacy items to screen effectively for health literacy, and findings indicated predictive ability for 3 of the 16 items:

1. How often do you have someone help you read hospital materials?
2. How confident are you filling out medical forms by yourself?
3. How often do you have problems learning about your medical condition because of difficulty understanding written information?

Subsequent research tested item generalizability using more diverse populations and settings (Chew et al., 2008; Wallace et al., 2006). Recently, a literature review of brief health literacy instruments published in JAMA described the items developed by Chew and colleagues as effective and efficient for identifying patient health literacy levels (Powers et al., 2010).

Findings published by Griffin and colleagues (2010) suggest variation between the S-TOFHLA and REALM. This variation suggests greater accuracy of one instrument over the other and/or variable parameters across populations and settings, and it warrants further investigation. Literature determining the appropriate use of these newer instruments in diverse settings is sparse, and, to date, these instruments are typically used interchangeably in research and clinical practice. As the advantages of assessing individual levels of health literacy gain increasing attention, identifying available instruments and understanding their unique characteristics become increasingly important.

Addressing variation across health literacy instruments is a critical issue because differences in construct measurement may affect measures of association between health literacy and other related variables. For example, variation in how health literacy is measured produces variation in categorization and in the risk factors identified as associated with low health literacy. Screening instrument characteristics, including the operational measure and categorization of health literacy, need to be examined to inform proper use in research and practice. In this study, we examined variation in health literacy categorization and in risk factors associated with low health literacy across three health literacy instruments: the S-TOFHLA, the REALM, and the 4-Item Brief Health Literacy Screening Tool (BRIEF). Findings from this study inform the proper use of these instruments in research and practice.

Method

Design and Sample
During 2006, we administered a written survey to a cross section of veterans attending ambulatory clinics within eight rural and nonrural VA medical facilities within a VA regional health care system in the Southeast United States (Haun, Noland Dodd, Graham-Pole, Rienzo, & Donaldson, 2009). We selected sites based on the availability of volunteer data collectors within the VA health care region. Inclusion criteria specified English-speaking veterans, at least 18 years of age, who were able to provide written consent to participate. We did not assess vision, hearing, or cognitive disorders such as dementia and Alzheimer's disease. Participation in the study was voluntary and without compensation.

Twenty-one trained volunteers collected data from patients during routine health care visits in an ambulatory care setting within the VA facilities. The volunteer data collectors consisted of the principal investigator, a nutritionist, a dental technician, a nurse educator, and 17 nurses. The co-principal investigator, a registered nurse employed by the VA Health Care System, sent an interoffice e-mail to recruit data collectors. Before data collection, research team members provided a 30-minute on-site training session for data collectors, including review of the data packets, administration instructions, and practice opportunities.

We used published results, with the S-TOFHLA as the reference measure, to inform the power analysis for this study (Chew et al., 2004). Using a binary unconditional logistic regression modeling approach to predict adequate health literacy, with primary fixed effects for age (per 10 years), female gender, non-Caucasian race, and educational status (high school or less), recruitment of 350 participants provided at least 85% power to detect a moderate effect size (OR = 2.0), assuming a two-tailed Type I error rate of 5% and a 33% low health literacy rate (Nielsen-Bohlman et al., 2004). To ensure an adequate sample size, we recruited a convenience sample of 378 English-speaking veterans attending ambulatory care clinics. The identified population consisted mostly of older adults, a group identified in the literature as at high risk for inadequate health literacy (Baker et al., 2002; Gazmararian et al., 1999; Howard, Gazmararian, & Parker, 2005; Nielsen-Bohlman, Panzer, et al., 2004; Sudore, Yaffe, et al., 2006; Williams et al., 1995).

Trained data collectors recruited participants during routine clinical care encounters. Following consent, a data collector administered the REALM orally in a face-to-face interview format. Participants completed the S-TOFHLA (administration time 7 minutes), the four BRIEF items, and a demographic information survey in written format. Each interview was approximately 20 minutes in length.
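The power analysis described above can be checked informally by simulation. The sketch below (Python, using NumPy and statsmodels) is an illustration only: the authors do not report how their calculation was performed, and the covariate distributions, the baseline rate, and the choice of low education as the covariate carrying the OR = 2.0 effect are assumptions made for this sketch.

    # Illustrative, simulation-based check of the power analysis described above
    # (n = 350, OR = 2.0, two-tailed alpha = 0.05, ~33% low health literacy).
    # Covariate distributions are assumptions made for the sketch only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    def simulated_power(n=350, odds_ratio=2.0, n_sims=300, alpha=0.05):
        """Fraction of simulated samples in which the low-education effect is detected."""
        rejections = 0
        for _ in range(n_sims):
            age_decades = rng.normal(6.0, 1.2, n)   # assumed age distribution (decades)
            female = rng.binomial(1, 0.08, n)       # assumed covariate prevalences
            minority = rng.binomial(1, 0.20, n)
            low_educ = rng.binomial(1, 0.45, n)
            # Intercept set so the baseline group has roughly a 33% low-literacy rate
            logit = np.log(1 / 2) + np.log(odds_ratio) * low_educ
            y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
            X = sm.add_constant(np.column_stack([age_decades, female, minority, low_educ]))
            fit = sm.Logit(y, X).fit(disp=0)
            if fit.pvalues[-1] < alpha:             # test the low-education coefficient
                rejections += 1
        return rejections / n_sims

    print(f"Estimated power for n = 350, OR = 2.0: {simulated_power():.2f}")

Whether the simulated power matches the reported 85% depends entirely on the assumed distributions; the sketch is meant only to make the model structure concrete.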
Measures

Each participant completed three health literacy screening instruments (REALM, S-TOFHLA, BRIEF) and 19 standard sociodemographic and health-related items written at the seventh-grade reading level. Seven items assessed demographic characteristics: age, gender, race/ethnicity, education level, language (English as first or second language), home ownership (an income proxy), and employment/retirement/functional status. Additional items assessed self-reported reading ability, reported on a 5-point Likert-type scale ranging from 1 (excellent) to 5 (poor); self-reported health status (based on three dichotomous indicators: diabetes, high blood pressure, and stroke); and eight items assessing respondents' self-reported ability to define health literacy, readiness to seek and access health information resources, and confidence in their ability to do so. These eight items were reported on a 5-point Likert-type scale ranging from 1 (strongly agree) to 5 (strongly disagree). Findings for the eight items assessing participants' ability to define health literacy, information seeking, and perceived self-confidence in seeking health information are not reported in this article.


We used Paasche-Orlow and Wolf's (2007) causal pathways model, which associates individual levels of health literacy with sociodemographic factors and health status, to guide our selection of the 12 independent variables included in the analysis: sociodemographic variables (age, gender, race/ethnicity, education level, perceived reading level, home ownership [income proxy], employment status, retirement status, and functional status) and health status variables (high blood pressure, stroke, and diabetes). Because 97% of the sample reported speaking English as their first language, we excluded this variable from the analysis. On the basis of the literature's support for use of the REALM and S-TOFHLA, and increasing interest in the individual health literacy screening items (combined in this study as the BRIEF), we designated the REALM, S-TOFHLA, and BRIEF scores as the dependent variables.

The S-TOFHLA (Parker et al., 1995) consists of two prose passages with 36 fill-in-the-blank response items worth one point each. Possible scores range from 0 to 36, and administration time is 7 minutes. Using the S-TOFHLA score, individual health literacy skills are divided into three criterion levels: (a) inadequate (0–16); (b) marginal (17–22); and (c) adequate (23–36; Baker et al., 1999).

Administration of the REALM requires respondents to read aloud three columns of 22 health-related terms; the words in each column appear in ascending order of difficulty. The REALM score is a summed value based on the number of correctly pronounced words in each column. REALM scores can range from 0 to 66 and classify literacy at three levels: limited (0–44), marginal (45–60), and adequate (61–66; Davis, Long, et al., 1993).

The BRIEF instrument measures health literacy using four items:

1. How often do you have someone help you read hospital materials?
2. How confident are you filling out medical forms by yourself?
3. How often do you have problems learning about your medical condition because of difficulty understanding written information?
4. How often do you have a problem understanding what is told to you about your medical condition?

The first three items are identified in the literature as effective for identifying individuals with inadequate/marginal health literacy skills (Chew et al., 2004; Chew et al., 2008; Wallace et al., 2006). The fourth item addresses spoken information, an identified gap in health literacy measurement in clinical practice. On the basis of a principal component analysis indicating that the four items accounted for 60% of score variance by measuring one distinct construct, the fourth item was added to constitute the BRIEF health literacy screening instrument (Haun et al., 2009). The four-item BRIEF yielded a Cronbach's alpha of 0.77. Further analysis using the receiver operating characteristic curve indicated that the combined four-item BRIEF had greater sensitivity for identifying inadequate health literacy than any of the four individual BRIEF items (Haun et al., 2009).

The BRIEF response options use 5-point Likert-type scales for each item: items 1, 3, and 4 range from 1 (always) to 5 (never), and item 2 ranges from 1 (not at all) to 5 (extremely). The BRIEF score is the sum of the four nonweighted items and can range from 4 to 20. BRIEF levels are categorized as follows: (a) inadequate (4–12); (b) marginal (13–16); and (c) adequate (17–20). The BRIEF levels are based on the scale used throughout the ongoing development of these items (Chew et al., 2004; Chew et al., 2008; Wallace et al., 2006). This study reports on an extended analysis of these data.
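The scoring rules and cut points described above (and summarized in Table 1) map directly onto a simple categorization routine. The following Python sketch is illustrative only; the function and variable names are ours and are not part of any instrument's official scoring materials.

    # Illustrative scoring helpers based on the cut points reported in the text
    # and in Table 1; names and structure are ours, not official scoring software.

    CUTOFFS = {
        # instrument: (top of low/limited range, top of marginal range)
        "S-TOFHLA": (16, 22),   # 0-16 inadequate, 17-22 marginal, 23-36 adequate
        "REALM":    (44, 60),   # 0-44 limited,    45-60 marginal, 61-66 adequate
        "BRIEF":    (12, 16),   # 4-12 inadequate, 13-16 marginal, 17-20 adequate
    }

    def categorize(instrument: str, score: int) -> str:
        """Map a raw score onto the three-level category used in this study."""
        low_max, marginal_max = CUTOFFS[instrument]
        if score <= low_max:
            return "inadequate"
        if score <= marginal_max:
            return "marginal"
        return "adequate"

    def brief_total(items: list[int]) -> int:
        """Sum the four non-weighted BRIEF items (each 1-5), yielding a 4-20 total."""
        assert len(items) == 4 and all(1 <= r <= 5 for r in items)
        return sum(items)

    # Hypothetical respondent: S-TOFHLA = 25, REALM = 58, BRIEF items 4, 3, 4, 4
    print(categorize("S-TOFHLA", 25))                       # adequate
    print(categorize("REALM", 58))                          # marginal
    print(categorize("BRIEF", brief_total([4, 3, 4, 4])))   # marginal (total = 15)

Note that this hypothetical respondent is classified differently by the S-TOFHLA than by the REALM and BRIEF, which is exactly the kind of disagreement the study quantifies.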




Data Analysis

We calculated descriptive statistics, including means, frequencies, and proportions, to provide preliminary statistical information, and Pearson product moment correlation coefficients to determine comparative validity between the screening instruments in the study sample. We also calculated agreement among the instruments in categorizing respondents as having adequate, marginal, or inadequate health literacy.

We initially categorized scores for each instrument into one of three levels. Because scores in the inadequate and marginal categories indicate risk for associated health outcomes, we combined these categories, resulting in a dichotomous variable to compare with the referent group (adequate). We then dichotomized (adequate vs. inadequate/marginal) scores for each instrument (S-TOFHLA, REALM, BRIEF) and fit separate binary logistic regression models to evaluate the relation between potential predictors and the probability of inadequate/marginal health literacy. Because of small sample cell sizes, we collapsed the independent variables education, self-reported reading level, and ethnicity prior to analysis.

We assessed outliers and collinearity using visual inspection. We also assessed collinearity using the variance inflation factor, defined as 1/tolerance, and considered a tolerance of ≤0.1, or equivalently a variance inflation factor of ≥10, to be a cause for concern. Variance inflation factors ranged from 2 to 4; moderate collinearity was evident between age and retirement. However, in this sample, as in the veteran population as a whole, retirement age varies widely. Given the sample size and the strategy of separate modeling, we included the same parameters in all three final models. For the initial data analysis, we used a univariate approach to determine the unique contribution of each variable (e.g., statistical significance of p
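The analytic steps described above correspond to a fairly standard pipeline. The Python sketch below (pandas and statsmodels) is illustrative: the synthetic data frame, the variable names, and the prevalences used to generate it are assumptions, not the study's actual data set or code.

    # Illustrative pipeline for the analysis described above; the synthetic data
    # and variable names are assumptions, not the study's data set or code.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(1)
    n = 378  # sample size reported in the text

    # Synthetic stand-in data so the sketch runs end to end.
    df = pd.DataFrame({
        "stofhla": rng.integers(0, 37, n),
        "realm": rng.integers(0, 67, n),
        "brief": rng.integers(4, 21, n),
        "age": rng.normal(60, 12, n),
        "female": rng.binomial(1, 0.08, n),
        "minority": rng.binomial(1, 0.20, n),
        "low_education": rng.binomial(1, 0.45, n),
        "poor_reading": rng.binomial(1, 0.15, n),
        "owns_home": rng.binomial(1, 0.70, n),
        "employed": rng.binomial(1, 0.30, n),
        "retired": rng.binomial(1, 0.50, n),
        "functional_limit": rng.binomial(1, 0.25, n),
        "high_blood_pressure": rng.binomial(1, 0.50, n),
        "stroke": rng.binomial(1, 0.10, n),
        "diabetes": rng.binomial(1, 0.30, n),
    })
    scores = ["stofhla", "realm", "brief"]

    # 1. Pearson product-moment correlations among the three instruments
    print(df[scores].corr(method="pearson"))

    # 2. Three-level categorization and percent agreement across all three instruments
    def categorize(score, low_max, marginal_max):
        if score <= low_max:
            return "inadequate"
        return "marginal" if score <= marginal_max else "adequate"

    cats = pd.DataFrame({
        "stofhla": df["stofhla"].apply(categorize, args=(16, 22)),
        "realm": df["realm"].apply(categorize, args=(44, 60)),
        "brief": df["brief"].apply(categorize, args=(12, 16)),
    })
    agreement = (cats.nunique(axis=1) == 1).mean()
    print(f"All three instruments agree for {agreement:.0%} of the sample")

    # 3. Collinearity check: variance inflation factors (concern if VIF >= 10)
    predictors = ["age", "female", "minority", "low_education", "poor_reading",
                  "owns_home", "employed", "retired", "functional_limit",
                  "high_blood_pressure", "stroke", "diabetes"]
    X = sm.add_constant(df[predictors].astype(float))
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    print(vifs)

    # 4. Separate binary logistic regression models for each instrument,
    #    outcome dichotomized as inadequate/marginal (1) vs. adequate (0)
    for instrument in scores:
        y = (cats[instrument] != "adequate").astype(int)
        fit = sm.Logit(y, X).fit(disp=0)
        print(instrument, np.exp(fit.params).round(2))  # odds ratios

With real data, step 4 is where the instrument-specific risk-factor estimates reported in the Results would come from; with the synthetic data used here, the odds ratios sit near 1 by construction.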
