Assessing the Effectiveness of REMS: A Best Practices Approach to Knowledge and Understanding Surveys


By: Karen Collins-Lenoir | Senior Director, Client Services, ParagonRx, an inVentiv Health Company

An Overview of REMS Assessments

Since September of 2007, the U.S. Food & Drug Administration (FDA) has had the authority to require that additional safety measures—beyond the product label—be applied to certain products. These measures are detailed in a Risk Evaluation and Mitigation Strategy (REMS) and are designed to ensure that a product’s benefits outweigh its risks. REMS program components can include a Medication Guide, Communication Plan, Elements to Assure Safe Use (ETASU), an Implementation System, and Timetable for Submission of Assessments. Each REMS must be assessed at specific intervals to determine if the program is effective and whether modifications are warranted. Although the timetables for assessing REMS vary from case to case, the minimum checkpoints are at 18 months, three years, and seven years from the REMS approval. Such assessments can include metrics related to:

» Material distribution: Did the communications reach the intended stakeholders?
» Registry or program enrollment: How many providers and patients participated?
» Training, prescribing, and/or drug utilization: Have the prescribers participated in program training, and are they prescribing within label?
» Website usage: How many hits did it receive and what documents were downloaded?
» Knowledge and understanding of product risks: Do physicians and/or patients understand the risk messages?

REMS with a Medication Guide, Communication Plan, or ETASU require the manufacturer to conduct a survey measuring the extent to which physicians and/or patients are aware of and understand the serious risks of taking the drug and how it should be used.

» Putting Risk to Work

It’s been several years since the U.S. Food & Drug Administration (FDA) was granted authority to require that some products be marketed with Risk Evaluation and Mitigation Strategies (REMS). Yet, no one knows how well these programs are actually working when it comes to improving drug safety. REMS programs are, indeed, routinely assessed, but in the absence of specific guidelines, the methodologies that sponsors use are not standardized and are often flawed. The old adage, “You can’t control what you can’t measure” speaks to a concern that this poses for both regulators and sponsors. While the FDA is expected to issue further guidance on how REMS can best be measured, the process will always be challenging. Here we present best practices for sponsors to follow in developing and conducting knowledge and understanding surveys as part of their REMS assessment.

A Flawed System; Refinements Underway

In 2012, about five years into the practice of REMS, the FDA reviewed a number of assessment reports and began to question the accuracy of many of the survey results due to the methodologies used. This is no doubt a reflection of the fact that the existing guidance has left manufacturers to their own devices in developing an acceptable measurement instrument and process. For example, they’ve had to decide on their own what constitutes an appropriate sample size, how best to reach and recruit respondents, how to proceed if they can’t fulfill their sample quota, and how to judge performance. Subsequent to the FDA’s review, the industry worked with the agency for two years to develop a harmonized set of recommended methodologies. Then, in March of 2014, a working group led by the International Society of Pharmacoepidemiology gave the FDA its

© ParagonRx. All Rights Reserved. October 2014


recommendations for assessing REMS along seven dimensions:

1. Evidence-based program components: Were the program components pretested, and are they adaptable to the healthcare setting without compromising their effectiveness?
2. Stakeholder-centered components: Were the materials developed with input from the targeted stakeholders?
3. Implementation: Was the program developed and delivered as planned within the required timelines?
4. Reach and adoption: Did the program reach the target audience and setting? What was the adoption rate, and what were the barriers to adoption?
5. Effectiveness: Did the program achieve the outcome or behavior desired?
6. Maintenance and sustainability: Was the program delivered consistently over time? What has changed, and why?
7. Resource utilization: What amount of healthcare resources is necessary to launch and maintain a REMS?

Meanwhile, the U.S. Office of the Inspector General (OIG) studied assessments performed between 2008 and 2011, and in early 2014 determined that it did not have sufficient data to demonstrate that REMS actually improved drug safety. In fact, no performance threshold has been set on which to judge REMS performance; no one has articulated what performance scores on assessment surveys are deemed acceptable. The OIG thus recommended that the FDA:

» Develop and implement a plan to identify, create, validate, and assess REMS components
» Identify REMS that are not meeting their goals and take action
» Evaluate one REMS with an ETASU at least once per year
» Take measures to obtain any missing information from REMS assessment reports
» Clarify for manufacturers what the expectations are for assessments—for example:
  o Specify sample sizes or thresholds
  o Seek legislative authority to enforce assessment plans
  o Ensure that assessment reviews are timely and prioritize those with ETASUs


At this writing, the agency is expected to issue new guidance for manufacturers on assessing REMS and has recently published a draft report, “Standardizing and Evaluating Risk Evaluation and Mitigation Strategies (REMS)”, in which the agency outlines its initiatives for REMS administration and work plans of projects in priority areas.

A Recommended Process for REMS Assessment Surveys

Preparing for and conducting REMS assessment surveys should, ideally, be a five-step process, as illustrated in Figure 1.

Figure 1: Knowledge, Awareness, and Behavior Survey Methodology

Step 1: Protocol and Survey Development

The protocol is a comprehensive document that details the purpose and methodology used for the assessment. Specifics of the evaluation methodology should include the sample size and the confidence interval associated with it, recruitment methods, materials needed, analyses used to assess the results, and the survey itself. It is important to remember that the goal is to measure REMS program effectiveness—specifically knowledge and understanding of risk messages; therefore, the survey questions should be limited to those related to risk messages. The questions should be structured with multiple-choice answers (with an emphasis on simple yes/no, true/false choices), and incorporate foils, or incorrect answers, as well as “I don’t know” options. In addition, each survey question should be linked to a general topic (or domain area) for reporting purposes.
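To illustrate the sample-size arithmetic that a protocol must specify, the sketch below computes the number of completed surveys needed to estimate a proportion (such as the percentage of correct responses) within a chosen margin of error. The z-values, margins, and response rates here are common survey-statistics defaults for illustration only, not FDA-mandated figures.

```python
import math

# Two-sided standard normal quantiles for common confidence levels.
Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def sample_size(margin, confidence=0.95, p=0.5):
    """Completed surveys needed to estimate a proportion within +/- margin.

    p=0.5 is the conservative (worst-case variance) assumption.
    """
    z = Z[confidence]
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def outreach_size(n_needed, response_rate):
    """Invitations to send, given an expected response rate."""
    return math.ceil(n_needed / response_rate)

n = sample_size(margin=0.05)     # 385 completes for +/-5% at 95% confidence
invites = outreach_size(n, 0.07) # outreach required at a 7% response rate
```

Planning backward from the expected response rate, as the second function does, helps decide early whether the eligible prescriber or patient population is even large enough to fulfill the quota.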


For instance, three reporting domains for healthcare providers might include:

1. Understanding of the serious risks associated with the use of the product,
2. Understanding of the appropriate use of the product, and
3. Adherence to the REMS program.

Figure 2 lists three sample questions, one in each domain.

Figure 2: Sample Survey Questions, Linked to Reporting Domains

Step 2: Survey Pretesting

Before the survey is administered on a large scale, it should be pretested with a relatively small sample of prescribers and/or patients to verify that the questions are written in a clear and understandable way. This step should uncover problematic wording and identify structural issues with the survey questions. Once the protocol and survey questionnaire are finalized, the document must be sent to the FDA for comment—a process that normally takes about 90 days for a response.

Step 3: Stakeholder Recruitment

For both survey participant groups—healthcare professionals and patients—the approach that will yield the greatest response rate is to use multiple forms of outreach such as fax, mail, and e-mail as well as to offer an incentive for participation. Generally, we’ve found that when these steps are taken, response rates fall in the 7-10 percent range. To be eligible, physician respondents should have written at least one prescription for the product, and they can be selected at random from prescription or enrollment data. Eligible patients (those who have been prescribed, taken, or administered a product) can be recruited directly via patient registries or pharmacy claims data or indirectly through prescribers known to have written a prescription for the product.

Step 4: Analysis

The statistical method used to measure understanding of product risks and the distribution of REMS elements is the binomial test (by which survey answers are categorized as either pass/fail, yes/no, or correct/incorrect). The percentage of correct responses to individual questions is analyzed using simple descriptive statistics, such as the mean and median. Analysts should examine the raw scores across reporting populations and reporting domains (question categories) as well. Inferential statistics can be applied to compare subgroups for differences.

Step 5: Reporting

Survey results can be reported in terms of overall performance as well as performance by individual question and question categories. Where applicable, performance can be broken out by population, and, if feasible, the trend over time can be reported. The fact remains, however, that at the moment there are no clear standards of performance established; the FDA has not identified what it regards as an acceptable score, so interpreting findings is really a qualitative exercise. In our experience, healthcare professionals generally score in the low 80th percentile and patients in the mid 70th percentile.
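A minimal sketch of the Step 4 analysis, using only the Python standard library: per-question percent-correct scores summarized by reporting domain, plus a two-proportion z-test as one common choice of inferential statistic for comparing subgroups. The response counts and domain names are hypothetical.

```python
import math
from statistics import mean, median

# Hypothetical data: (correct, answered) counts per question, keyed by
# reporting domain. Real data would come from the fielded survey.
responses = {
    "serious_risks":   [(182, 200), (176, 200)],
    "appropriate_use": [(158, 200), (168, 200)],
    "rems_adherence":  [(144, 200)],
}

def pct_correct(correct, total):
    return 100.0 * correct / total

# Descriptive statistics per reporting domain (question category).
for domain, counts in responses.items():
    scores = [pct_correct(c, n) for c, n in counts]
    print(f"{domain}: mean {mean(scores):.1f}%, median {median(scores):.1f}%")

def two_proportion_z(c1, n1, c2, n2):
    """Two-sided z-test comparing correct-response rates of two subgroups."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value
```

For example, comparing the best- and worst-scoring questions above (182/200 versus 144/200) yields a large z statistic and a very small p-value, flagging a real difference between domains rather than sampling noise.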

Lessons Learned from Real-Life Assessment Scenarios

The following five scenarios, all drawn from real life, illustrate the degree to which REMS elements and timetables differ, but more importantly, offer some “lessons learned” that might be applicable in other similar circumstances.


Figure 3 lists the scenarios.

Figure 3: A Variety of REMS Scenarios

Scenario 1: Comprehensiveness Leads to Early Release of REMS

This scenario features a product approved with a REMS which includes a Medication Guide and Communications Plan with an annual timetable for assessments for the first five years and another assessment at year seven—a timetable more likely for a REMS with an ETASU. Several REMS modifications were made to include new risk messages and labeling; therefore, the survey itself needed to be revised accordingly for each assessment interval. While the survey changes meant that the sponsor could not complete a direct question-to-question comparison over time, it could—and did—interpret and trend the results by reporting domain or question category. Taking a conservative approach with the assessment timetable and providing a thorough annual, comparative analysis helped to support FDA’s decision to release the product from the REMS requirements after the fourth interval.

Scenario 2: Sponsors Realize Efficiencies in a Shared REMS System

In this scenario, products from multiple sponsors share a single REMS system with an ETASU and an assessment timetable every six months in year one and then annually thereafter. The knowledge and understanding survey results from physicians and patients, however, are required at a different interval (due at the two- and four-year marks). Postponing the survey until year two allowed the products to achieve greater market penetration and improved the ability to recruit an appropriate number of survey respondents. The use of pharmacy claims data to identify a specific patient population helped to expedite patient sample fulfillment. Establishment of a sponsor sub-team to manage and adjudicate the REMS assessment requirements streamlined the processes for reviewing, approving, and submitting REMS documentation in an efficient and timely manner.

Scenario 3: Product with Dual Indications and Single REMS Creates Management Complexity

In a very unusual situation, a product with two indications was approved with a single REMS program. The REMS elements for the first indication include only a Communication Plan, while the second indication has an ETASU. Meanwhile, the timetables for assessment reporting are also different, as are the patient and prescriber populations. Variations in the REMS components for the two indications led to complexity for the sponsor. Given the different elements, stakeholders, timetables, and metrics related to the two indications, it was critical for the sponsor to monitor reporting requirements to ensure timely submissions. This required a detailed project plan and meticulous surveillance.

Scenario 4: Label Changes Require Survey Modification

In this scenario, the initial REMS program includes a Communications Plan and a standard assessment timetable of 18 months, three years, and seven years from REMS approval. Over time, the label was changed several times to reflect new indications. Thus, the REMS was modified to accommodate updated risk messages and materials, and another assessment interval was added. The sponsor recognized that the label changes, REMS modifications, and original survey design created lower-than-expected results in the early surveys. Consequently, the sponsor redesigned the survey to focus only on measuring healthcare providers’


knowledge and understanding of the risk messages (versus collecting extraneous data that was not relevant to the goals of the REMS) and bucketed those survey questions into reportable domains. The sponsor also pretested the survey questionnaire prior to implementing it in the newly added assessment interval. Similar to Scenario 1, although the company was not able to submit direct question-to-question comparisons of performance among assessment intervals, it was possible to compare qualitative results with the use of reporting domains. It is also likely that the focused questionnaire and use of pretesting supported improved performance scores.

Scenario 5: FDA Proactively Recommends Survey Postponement

A newly launched hospital product was approved with a REMS that includes a Communications Plan and an ETASU, the combination of which is unusual. Assessments are to be reported at six months and 12 months post approval and annually thereafter, which is a common timetable for REMS with ETASU. Because the product was experiencing slow market uptake, the FDA, on its own initiative, recommended that the sponsor delay the first knowledge and understanding surveys until year two. This allowed time for greater market penetration and program enrollment, ultimately giving the sponsor the ability to survey a meaningful sample of physicians and patients.

Best Practices

We are hopeful that the forthcoming final guidance from the FDA will give sponsors much-needed direction on how best to assess the effectiveness of REMS programs. In addition, we offer the following suggestions specific to knowledge and understanding surveys, gathered from our work with multiple sponsors across a variety of REMS programs in different situations. We recommend that to assess REMS properly through surveys, sponsors:

» Remember that the goal of the surveys in this application is to assess the performance of a REMS program; it is not a traditional market research exercise. Therefore, the scope of the survey should be limited to questions relating to key risk messages. Exclude questions that are not important to the objective of the assessment. Other areas of inquiry—around every possible adverse event, for example—should not be part of the survey even if they are of importance to the product’s safety profile.

» Structure the survey using the FDA-preferred format of multiple-choice questions, geared toward yes/no, true/false answers. The options should include foils (incorrect answers) as well as an option for an “I don’t know” response.

» Pretest the survey questions prior to implementation. This will identify language or structural issues that interfere with the clarity of the question’s intent.

» Design the questionnaire to reflect the latest trends and feedback issued by the FDA, and be prepared to revise the survey quickly as needed after the FDA’s review to allow as much time as possible for survey fielding.

» Provide more than one mode of survey administration, such as web- and phone-based surveys, to enhance participation rates. The phone-based survey should be facilitated by a trained interviewer.

» Offer participants incentives for completing surveys and send reminder letters to nonresponders, both as ways to bolster recruitment.

» Link each survey question to a specific reporting domain or category of questions for scoring purposes. In recent assessments, the FDA has asked that mean scores be given for each key risk domain.

» Carefully manage the timelines for collecting and reporting data. The FDA must see a copy of the protocol prior to survey implementation and is required to give sponsors feedback within 90 days. Assessment surveys must remain open until there are 59 or fewer calendar days before the reporting date. All told, sponsors should allow seven months before the assessment due date to prepare, field, analyze, and submit their report of results.
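The timeline guidance above (90-day FDA review of the protocol, surveys open until 59 or fewer days before the reporting date, roughly seven months overall) lends itself to simple back-planning. The due date below is hypothetical, and "seven months" is approximated as 210 days.

```python
from datetime import date, timedelta

# Hypothetical assessment due date; the 90- and 59-day figures come from
# the timeline guidance above, and 7 months is approximated as 210 days.
due_date = date(2015, 6, 1)                        # assessment report due to FDA
start_prep = due_date - timedelta(days=210)        # begin protocol work ~7 months out
fda_feedback_by = start_prep + timedelta(days=90)  # FDA feedback window on the protocol
survey_close = due_date - timedelta(days=59)       # survey stays open until 59 days remain

for label, d in [("Start protocol work", start_prep),
                 ("FDA feedback due by", fda_feedback_by),
                 ("Close survey by", survey_close),
                 ("Report due", due_date)]:
    print(f"{label:>20}: {d.isoformat()}")
```

Note how tight the schedule is: if the FDA uses its full 90-day review window, only about four months remain for fielding, analysis, and report preparation.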


Conclusion

Clearly, the process for assessing REMS has not been entirely effective to date, nor has it been in any way standardized. However, REMS assessments have been the subject of much recent scrutiny, and refinements have been proposed. We fully expect the FDA to issue more comprehensive guidance that will take the onus off sponsors for developing their own methodologies—in particular those related to knowledge and understanding surveys. Regardless of the timing, nature, and specifics of that guidance, there are several steps that sponsors can follow to improve their experience with measuring REMS and to increase the validity of their results. These have been tested in the field and are now considered best practices.



References

ISPE Response to FDA Questions, “Standardization and Evaluation of Risk Evaluation and Mitigation Strategies (REMS)” – Public Meeting on July 25-26, 2013. March 5, 2014.


