In its January 2015 report, the Institute for Safe Medication Practices (ISMP) discusses several ways in which drug manufacturers' side-effect data are of poor quality, including incomplete information. It also addresses data quality in the context of what is actually being reported, i.e. which types of cases get submitted. The conclusion about questionable submission of cases is valid. Consider an example: MAHs are required to provide causality assessments on solicited cases. Given that in most situations there is very little evidence of a causal relationship, it should be standard practice for most MAHs to assess those cases as not related. However, inspection experience has taught us that many regulators take a much more conservative approach: since the presence of a causal relationship can never be completely ruled out, MAHs end up calling these cases related and reporting them. Moreover, although regulatory alignment on what should be treated as spontaneous versus what can be considered solicited has improved over the last 10 years, we are not quite there yet. For example, if a company has set up a patient support program, is every AE reported through that program to be considered solicited? The company may have given customers a toll-free number to call, but it did not necessarily solicit their call to report an AE. Yet because the company provided the vehicle, such reports are more often than not labeled as solicited, triggering the conservative approach to causality assessment and reporting. There is a real lack of consistency in the veracity of ADR reporting, and the ISMP report highlights some of these same points.
With respect to the quality of ICSR submissions, the picture from an FDA perspective is quite difficult to gauge. The FDA does not provide structured metrics back to companies that submit cases, so you never really get a sense of whether the data are meeting its expectations on a case-by-case basis. Given how the FDA collects data, the picture would also look quite different for reports submitted under an IND versus the post-marketing reports that get filed. However, everybody operating in the U.S. is well aware that individual reviewers will provide feedback on the quality of individual cases; they tend to focus not on consistency from one case to the next but on missing data or on the interpretation of the data within the case. For example, does the narrative really capture the essence of what happened to that patient? Reviewers will pick up the phone and call somebody back if they feel a case has missed the point of the initial report.
In the EU, it's a little different. Several agencies, for example in the U.K., France and Germany, periodically check samples of the data that a given MAH has submitted to them. They return metrics on the number of errors, which tell companies whether there are database convention issues or individual errors on cases. This is a really good way of letting companies know whether they are adhering to the agency's expectations. These reviews tend to focus primarily on the ICH E2B guidance and on how the agency expects data to be structured within a case. Companies that do not comply are typically picked up in the outputs the agencies send out.
Since agencies are performing some of these reviews, at least in Europe, companies should use the data the agencies provide to drive their internal quality checks. Bear in mind, however, that the agencies do not perform exhaustive quality checks and are certainly not looking at 80+ fields. Since agencies will not look at more than about 30 fields when reviewing individual cases, companies can take a similarly pragmatic approach.
Companies should also build as many E2B edit checks into their database as they can. These are a huge help because quality issues in a case can be flagged before it reaches the point of submission, allowing corrections to be made up front rather than relying entirely on manual review or on fixing errors after an agency has rejected the case. It should be noted, though, that a large number of edit checks can sometimes affect the performance of the database.
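As a minimal sketch of what such an edit check might look like, the fragment below validates that required data elements are populated before a case is queued for submission. The field names loosely follow E2B concepts but are illustrative assumptions, not the actual E2B(R3) data-element identifiers, and a real rule set would be far larger:

```python
# Illustrative pre-submission edit check for an ICSR, sketched in plain
# Python. Field names are hypothetical, not real E2B(R3) identifiers.

REQUIRED_FIELDS = [
    "safety_report_id",
    "receive_date",
    "primary_source_country",
    "patient_identifier",
    "suspect_drug",
    "reaction_term",
]

def edit_check(case: dict) -> list[str]:
    """Return a list of quality issues found in a single case."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not case.get(field):  # missing or empty value
            issues.append(f"missing required field: {field}")
    return issues

# Example: a case missing the reaction term is flagged before submission.
case = {
    "safety_report_id": "US-XYZ-2015-000123",   # hypothetical case ID
    "receive_date": "2015-01-12",
    "primary_source_country": "US",
    "patient_identifier": "AB",
    "suspect_drug": "Drugamol",                 # hypothetical product
    "reaction_term": "",
}
print(edit_check(case))  # -> ['missing required field: reaction_term']
```

Running checks like this at data entry, rather than at submission, is what lets corrections happen up front.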
If there is a concern about building too many edit checks into the database itself, there are proprietary tools that sit on top of the database and do not affect its performance. They can flag missing data as well as inconsistencies between fields that should be fully aligned within a case. There are ways to automate the process, and although none of the current tools is perfect, they are better than relying on manual review all the time.
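The cross-field inconsistencies mentioned above can be sketched in the same spirit. The rules and field names below are illustrative assumptions about the kind of logic an overlay tool might run, not a description of any particular product:

```python
from datetime import date

# Illustrative cross-field consistency checks of the kind an overlay
# tool might run on top of the safety database. Field names and rules
# are hypothetical examples, not a real validation rule set.

def consistency_check(case: dict) -> list[str]:
    issues = []
    onset = case.get("reaction_onset_date")
    therapy_start = case.get("therapy_start_date")
    # A reaction dated before the suspect therapy began is suspicious.
    if onset and therapy_start and onset < therapy_start:
        issues.append("reaction onset precedes therapy start")
    # A case flagged serious should carry at least one seriousness criterion.
    if case.get("serious") == "yes" and not case.get("seriousness_criteria"):
        issues.append("case marked serious but no seriousness criterion selected")
    return issues

# Example: both rules fire on this (hypothetical) inconsistent case.
case = {
    "therapy_start_date": date(2015, 1, 10),
    "reaction_onset_date": date(2015, 1, 5),
    "serious": "yes",
    "seriousness_criteria": [],
}
print(consistency_check(case))
```

Whether such rules live inside the database or in an external tool, the point is the same: fields that should be aligned within a case get checked mechanically instead of by eye.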
[To learn more about the importance of measuring ICSR quality, you can listen to our recent PV Webinar conducted in association with DIA and presented by David Balderson – Vice President of Global Safety, Sciformix Corporation. David spoke in depth on a new approach to ensuring quality safety submissions the first time. The webinar also featured a very informative expert panel discussion with panelists Calin Lungu, CEO – Drug Development Consulting Services S.A. (DDCS) and Mangesh Kulkarni, Head Safety Operations – Novartis Healthcare Pvt. Ltd.]