We are in the era of big data – collecting big data, structuring big data, analyzing big data. Big data has given us perspectives we could never have imagined, and I am confident it is taking us to a better, more efficient place. But big data, and data in general, does not replace real-life experience. At least not in clinical research.
A few years ago, I was told a story about a clinical trial involving patients with arthritis. The data showed a huge drop-out rate, so the sites assumed patients didn't see the value in participating.
The opposite was true. Most patients had severe symptoms that caused them a lot of pain. So much pain that on some days they simply could not function or enjoy the company of relatives and close friends.
Some research nurses decided to visit the dropped-out patients at their homes and were surprised by the reason behind their decision. Because of the severe pain, patients had to choose between spending a few hours at the site or spending those same hours with their grandchildren, and they chose the latter.
How human, right?
And yet, data won’t always point to these human aspects, the black swans of our studies. This is why it’s so crucial we don’t forget about the real-life examples that might impact everything we are planning and thinking about.
But what do real-life insights mean in clinical research?
I recently posted an article about the Missing Questions in Site Questionnaires. There I emphasized how much we need to learn from the experience of study coordinators, and how doing so can improve both patient experience and clinical trial outcomes.
Still, it's not only study coordinators whose viewpoints we usually fail to take into consideration. What about CRAs, enrollment assistants, investigators, project managers, and everyone else involved in a clinical trial in one way or another? We haven't found a scalable way to collect their know-how and put it into action.
Over the last few years at TrialHub, we have become convinced that data should be combined with real-life insights to bring better accuracy to clinical trial plans and patient recruitment projections. Here are a few examples that might convince you too:
- Enrollment benchmarking: Many platforms help you estimate how well a clinical trial performed and try to bring this data down to the site level. If you've been working in the space for some time, you know this can be quite misleading, because the calculation usually divides total enrollment across all sites as if they all performed the same. And even when sites clearly did not perform the same, the formula cannot account for when each site was initiated or how long its recruitment actually took.
- Site experience: Whether you look at a single investigator or the organization itself, reviewing their past experience is a must. Knowing how many clinical trials they have participated in gives you a good idea of their track record. What this information doesn't give you is the quality of their performance. Access to real-life insights on how well they actually performed can make or break your future trial, by setting realistic expectations and letting you choose the best performers among those with relevant experience.
- Submission processes and timelines: Official regulatory timelines are always tricky. Even the smallest details in a protocol can make a difference in how a local authority or ethics committee/IRB will receive a given document and what their expectations will be. A good recent example is the eConsent process and televisit approvals in several European countries. In theory, televisits and eConsent have been widely adopted since the COVID-19 outbreak. At the same time, we know of a few protocols that received official authority approval but failed to convince the ethics committees to accept their plan of action with patients. Another good example is the new regulations around diagnostics and medical devices, where you might feel lost in the “wild, wild West”. If you want to read more about some of the challenges in this space, here's my article about running feasibility in the medical device and diagnostics domain: 4 Things That Make Medical Device & Diagnostic Feasibility Harder
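To make the enrollment-benchmarking pitfall concrete, here is a minimal sketch with entirely made-up numbers (site names, dates, and enrollment counts are illustrative, not real trial data). It contrasts a naive rate, which divides total enrollment evenly across all sites over the full trial duration, with a rate adjusted for when each site was actually initiated:

```python
from datetime import date

# Hypothetical example data: (site name, activation date, patients enrolled).
# All values are illustrative assumptions, not real trial data.
sites = [
    ("Site A", date(2022, 1, 1), 24),
    ("Site B", date(2022, 1, 1), 6),
    ("Site C", date(2022, 7, 1), 3),  # initiated six months later
]

trial_end = date(2023, 1, 1)

def months_between(start: date, end: date) -> int:
    """Whole months between two dates (coarse, for illustration only)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

total_patients = sum(n for _, _, n in sites)
trial_months = months_between(date(2022, 1, 1), trial_end)

# Naive benchmark: total enrollment spread evenly over all sites,
# as if every site recruited for the entire trial.
naive_rate = total_patients / len(sites) / trial_months

# Adjusted benchmark: each site's rate over the months it was actually open.
adjusted = {
    name: n / months_between(activated, trial_end)
    for name, activated, n in sites
}

print(f"naive: {naive_rate:.2f} patients/site/month")
for name, rate in adjusted.items():
    print(f"{name}: {rate:.2f} patients/month")
```

The naive average (about 0.92 patients/site/month) hides the fact that Site A recruited four times faster than Sites B and C once initiation dates are taken into account, which is exactly the distortion described above.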
How do you currently collect real-life insights?
Have you found the balance between big data and real experience know-how?
If you want to learn how we did it, feel free to contact the TrialHub team and get an overview of our best practices in the space.