
What is ICSR Quality and How Is It Measured?

David Balderson, Vice President of Global Safety Operations

In my 16 years on the operational side of Safety, I have learned that the perception of what ICSR (Individual Case Safety Report) quality means often varies: the company's perspective as an MAH (Marketing Authorization Holder) differs from what regulators perceive as quality, and even more so from the views of external stakeholders. External stakeholders, meaning third parties who perform independent reviews of the data, often with a view to generating publications, usually focus on data completeness. On the MAH side of the spectrum, the focus tends to be more on data consistency: making sure that you have accurately represented the data you received and that it is internally consistent within the case. Regulators usually sit somewhere in between.

Completeness of data certainly varies from one case to the next. You should always expect better quality information from a clinical trial case than from market research or social media; by and large, the quality of data from the latter sources is going to be poor. This is where managing expectations becomes imperative, both in terms of what is realistic to collect in the first place and what we need to report out. Clearly, every company has the responsibility to ensure consistency of data within a case. We need to make sure that we accurately represent the data we have received and that the fields within a case are internally consistent.

The Institute for Safe Medication Practices, in its January 2015 report, said, “Drug manufacturers’ data on side effects are of poor quality, lacking information.” Only 49% of the reports submitted met the ‘basic standard’ of being ‘reasonably complete’ with respect to age, gender and event date. However, the report fails to acknowledge the impact of sources such as literature articles, or the possibility that certain data are withheld for data privacy reasons.

To ensure the quality of ICSRs, at a minimum, companies need to create checklists that define the fields to be assessed and break them down by criticality, with appropriate targets (a simple illustration follows the list below).

  • The typical approach that most companies take starts with the critical fields, where you don’t want to see many errors. You will probably want to target no more than a 1% error rate for fields such as event terms/coding, product, seriousness, causality and receipt date. There are not going to be too many fields in this group; somewhere between five and seven at most.
  • The next, and largest, group is the major fields. These can include pretty much everything else in the case: for example, patient demographics, reporter details, product details, event details, lab data, and consistency between the narrative and the fields. This is where it comes down to taking a pragmatic approach, because you could in theory look at every other field in the database. But really, you should be trying to get down to 20-25 fields at most and focusing on what is really important from the case perspective.
  • The final category is focused on the narrative itself. When we talk about minor errors in the narrative, we are not talking about misrepresentation of data but about grammar and typos.
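One way to make such a checklist concrete is to hold it as a small piece of configuration that a QC tool or spreadsheet can consume. The sketch below is a minimal illustration in Python; the field names, the grouping, and the 5% target shown for major fields are assumptions for illustration only (the text above specifies a target only for the critical group).

    # Minimal sketch of a field-quality checklist, assuming a plain Python
    # dictionary is sufficient. Field names and the "major" target are
    # illustrative assumptions, not a prescribed standard.
    ICSR_QUALITY_CHECKLIST = {
        "critical": {
            "target_error_rate": 0.01,   # aim for no more than ~1% errors
            "fields": [
                "event_term_coding", "suspect_product", "seriousness",
                "causality", "receipt_date",
            ],
        },
        "major": {
            "target_error_rate": 0.05,   # hypothetical target; set per company policy
            "fields": [
                "patient_demographics", "reporter_details", "product_details",
                "event_details", "lab_data", "narrative_field_consistency",
                # kept to roughly 20-25 fields in practice
            ],
        },
        "minor": {
            "target_error_rate": None,   # narrative grammar/typos: tracked, not targeted
            "fields": ["narrative_grammar", "narrative_typos"],
        },
    }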

I favor taking the field-based approach to quality data. You derive the metric from the number of errors you have made out of the fields you are looking at. So if you are looking at 5 critical fields, then that is your error opportunity; if you find one error, your error rate is 20% (1/5). Field-based data is a much better way to get to the root of where the issues really are. If you only roll your quality metrics up to an overall case level and say, ‘We’re aiming for 90% case-level accuracy,’ it doesn’t really tell you anything about where you are making the errors. Are you making them in critical fields, or are they typo/grammar types of errors? In a field-based approach you can still produce case-level data, but it is not particularly useful in isolation without field-level data as well.
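As a rough illustration of that arithmetic, here is a minimal Python sketch of the field-based metric. The (criticality, field, has_error) record shape is an assumption made for illustration; the point is simply that the denominator is the number of fields assessed (the error opportunities), and that rates can be broken out by criticality to show where the errors actually sit.

    from collections import Counter

    def field_error_rate(errors_found, fields_assessed):
        """Error rate = errors found / error opportunities (fields assessed)."""
        return errors_found / fields_assessed if fields_assessed else 0.0

    def error_rates_by_criticality(assessments):
        """assessments: iterable of (criticality, field_name, has_error) tuples
        gathered during case review. Returns an error rate per criticality,
        which shows where errors are being made, unlike a single case-level
        accuracy figure."""
        opportunities, errors = Counter(), Counter()
        for criticality, _field, has_error in assessments:
            opportunities[criticality] += 1
            errors[criticality] += int(has_error)
        return {c: errors[c] / opportunities[c] for c in opportunities}

    # Worked example from the text: 1 error across 5 critical fields -> 20%.
    print(f"{field_error_rate(1, 5):.0%}")  # prints 20%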

Be careful when assessing overall case quality. Even though you might have an overall target such as 90% error-free cases, it doesn’t mean every error should cause a case to fail. An error in certain critical fields should certainly fail a case, but other issues, such as typos in the narrative, don’t need to. It’s up to you to decide on a reasonable measure of overall case quality, but avoid setting unrealistic targets.
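If you do report a case-level figure, one way to keep it honest (an assumed convention, not one prescribed here) is to let only designated criticalities fail a case, so a narrative typo is counted but does not sink the case. A minimal sketch:

    def case_passes(field_results, failing_criticalities=("critical",)):
        """field_results: iterable of (criticality, has_error) tuples for one case.
        The case fails only on errors in criticalities you have decided should
        fail a case (critical fields by default); minor narrative issues are
        recorded but do not fail the case."""
        return not any(
            has_error and criticality in failing_criticalities
            for criticality, has_error in field_results
        )

    # Example: one narrative typo, no critical errors -> the case still passes.
    assert case_passes([("critical", False), ("major", False), ("minor", True)])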

[To learn more about the importance of measuring ICSR quality, you can listen to our recent PV Webinar conducted in association with DIA and presented by David Balderson – Vice President of Global Safety, Sciformix Corporation. David spoke in depth on a new approach to ensuring quality safety submissions the first time. The webinar also featured a very informative expert panel discussion with panelists Calin Lungu, CEO – Drug Development Consulting Services S.A. (DDCS) and Mangesh Kulkarni, Head Safety Operations – Novartis Healthcare Pvt. Ltd.]