Submitted by Jeffrey Heilbraun on October 8, 2013
This week, I will be hosting a webinar focused on considerations and best practices for Blood Pressure (BP) monitoring as part of the cardiac safety assessment for compounds in development. The evaluation of BP responses to drugs being developed for non-cardiovascular indications is garnering increased public awareness and regulatory focus, evidenced by formal scientific discussions at prominent meetings and recent publications by the Cardiac Safety Research Consortium (CSRC) on this topic.
When designing a study aimed at measuring the “off-target” BP effect of a compound, there are a number of factors to keep in mind. Here are three key considerations for accurately defining the off-target BP effect and maximizing the potential of your blood pressure cardiac safety study.
- Do changes in blood pressure relate to compound concentration?
It is important to determine whether an off-target BP signal is associated with increasing drug concentrations or is independent of drug concentration. Determining whether the observed change in BP (as well as other safety and efficacy measures) is dose dependent provides valuable clinical information, including a specific concentration threshold associated with changes in BP. Evaluating the concentration effect on the BP signal within a single ascending dose (SAD) or multiple ascending dose (MAD) study early in a compound's development can be beneficial before moving into a Phase II, patient-based population.
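The dose-dependency question above can be screened with a simple concentration-effect regression. Here is a minimal sketch, assuming hypothetical paired plasma concentrations and BP changes from a SAD cohort (all values are illustrative, not real trial data):

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations from a SAD cohort:
# plasma concentration (ng/mL) and change in systolic BP from baseline (mmHg).
concentration = np.array([0, 10, 25, 50, 100, 200, 400], dtype=float)
delta_sbp = np.array([0.5, 1.0, 1.8, 3.2, 5.9, 11.0, 21.5])

# Simple linear concentration-effect screen: the slope estimates mmHg of BP
# change per ng/mL, and the p-value tests the null hypothesis that the
# BP change is concentration-independent.
result = stats.linregress(concentration, delta_sbp)
print(f"slope = {result.slope:.4f} mmHg per ng/mL, p = {result.pvalue:.4g}")
```

A slope whose confidence interval excludes zero would suggest a concentration-dependent off-target effect worth characterizing further; in practice this analysis would use a proper exposure-response model rather than a simple linear fit.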
- Blood pressure signals during short- and long-term exposure to a therapeutic compound.
Establishing a comprehensive BP profile of a compound provides valuable information from a clinical management and regulatory perspective. When evaluating a study compound, it is important to determine whether the BP response reaches a plateau or continues to increase as a function of extended exposure. From a safety perspective, it may also be important to understand what happens to BP upon cessation of drug treatment (i.e. does BP return to baseline or to a clinically appropriate threshold?). Data on the fluctuation of BP signals are important because they inform the sponsor whether changes in BP resolve on their own or whether additional intervention (in the form of secondary medications) is required to use the compound safely. This can become an important consideration depending on the therapeutic indication and whether the medication is taken on a long-term basis, a short-term basis, or intermittently when disease symptoms are present.
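As a rough illustration, the plateau and reversibility questions can be framed as simple checks on a BP time series. This sketch uses hypothetical weekly systolic BP means and assumed tolerance thresholds (0.5 mmHg/week for the plateau slope, 2 mmHg for return to baseline); real criteria would be prespecified in the protocol:

```python
import numpy as np

# Hypothetical weekly systolic BP means (mmHg): 8 weeks on drug,
# then 4 weeks after cessation. All numbers are illustrative.
on_drug = np.array([120, 124, 127, 129, 130, 130, 131, 130], dtype=float)
off_drug = np.array([128, 124, 121, 120], dtype=float)
baseline = 120.0

# Plateau check: fit a line to the last four on-drug weeks; a near-zero
# slope suggests the BP response has leveled off rather than still rising.
late_slope = np.polyfit(np.arange(4), on_drug[-4:], 1)[0]
plateaued = abs(late_slope) < 0.5  # mmHg/week, assumed tolerance

# Reversibility check: does BP return to within 2 mmHg of baseline
# after treatment stops?
reversible = abs(off_drug[-1] - baseline) <= 2.0

print(f"plateaued={plateaued}, reversible={reversible}")
```

With these illustrative data, both checks pass: the response plateaus around 130 mmHg and returns to baseline after cessation.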
- What are the available options for evaluating the blood pressure changes?
When it comes to monitoring BP, there are several options for the sponsor to consider based on a number of parameters, including clinical study phase, patient population, therapeutic indication, and an understanding of BP response profiles obtained during early compound development. Advances in BP technology enable efficient capture of BP measurements in both the clinic and office setting as well as directly from a patient’s home. Pilot studies of BP measurements are often used as a guide for selecting an approach. Common BP measurement and monitoring options include:
- 24-hour Ambulatory Blood Pressure Monitoring (ABPM). ABPM is a key diagnostic technology, providing surrogate endpoint data describing BP changes over a 24-hour period. Continuous monitoring enables evaluation across circadian rhythms, providing a more comprehensive analysis of BP changes. ABPM data are now included in a number of regulatory submissions for novel drugs and are increasingly considered for determining new drug safety during regulatory review.
- Automated office blood pressure monitoring. The benefits of using automated BP monitoring include calibration and standardization of equipment across study sites and removal of variability resulting from human/auscultatory bias.
- Remote home blood pressure monitoring. Telemonitoring provides a means of electronically transferring study participant, self-monitored BP data to a central repository, reducing the number of patient visits and increasing patient compliance for a study. This methodology is valuable for providing real time visibility into blood pressure trends during the conduct of the study.
Be sure to join me on Oct 10th to further explore best practices for BP monitoring in clinical trials and gain perspective from a pharmaceutical sponsor who has been directly involved in addressing and navigating approaches to defining an off-target BP signal for a developing compound.
Link to original press release: http://www.bioclinica.com/blog/blood-pressure-endpoints-clinical-trials-are-you-monitoring
Submitted by Colin Miller on September 9, 2013
This week the annual Society for Clinical Data Management (SCDM) conference, the world’s largest educational event for clinical data managers and related professionals, will be held in Chicago. I have been invited to give a talk aimed at clinical data managers, highlighting some of the complexities and challenges associated with trials containing imaging endpoints.
While preparing for my presentation and thinking about the challenges that are faced by data managers, I put together the following checklist of what I consider best practices for managing and processing imaging data in clinical trials.
1. Understand the Imaging Review Charter
Clinical trials with imaging endpoints require an Imaging Review Charter (IRC). The IRC serves as a roadmap for standardizing and interpreting data coming from trials containing imaging endpoints, providing a comprehensive and detailed description of the clinical trial imaging methodology. It is important to ensure that the Charter and export specifications document match with respect to the primary endpoints. Although additional data or assessments not described in the Charter may be exported, the key assessments have to match the content of the Charter. An understanding of the IRC will help ensure that a data manager is in tune with the overall flow of data for the trial and is up to speed with all imaging data that will fall under their supervision.
2. Understand your imaging data
Data managers should be familiar with the imaging endpoint(s) being measured in a given study. Different endpoints provide different data outputs. Some data are quantitative at the time of acquisition (e.g. PET and DXA scans) while other data are derived from image measurements (e.g. lesion area or volume). Another type of data output is scoring systems, which are commonly used in many therapeutic areas (e.g. the Genant score for osteoporosis or the modified Sharp score for RA) and provide semi-quantitative data. Familiarizing yourself with the measurements feeding into an eCRF is crucial to understanding and validating data and will facilitate the development of appropriate edit checks for a study.
3. Develop the edit checks being applied to your imaging data early in the process
Developing optimal edit checks for each imaging endpoint is important to ensure high-quality data. Different imaging endpoints will require different edit checks due to the inherent variability of different modalities and measurements. Longitudinal studies (e.g. lesion tracking in oncology studies) provide multiple measurements and track differences over time, making it necessary to understand the extent of variability that can be tolerated in a given measurement. Imaging core labs are often tasked with performing edit checks, so it is critical for the data manager to understand these edit checks and the rationale behind choosing them.
4. Understand the read design
The read design for a clinical trial ultimately dictates the imaging data that will pass through the hands of the data manager. Different clinical trials utilize different reader paradigms, from a relatively straightforward single reader to more complex paired reads with adjudication. The choice of read paradigm is based on a number of factors including study phase, regulatory compliance, operational efficiency, and cost-benefit. By understanding the selected reader paradigm, data managers can understand the flow of data in a trial and anticipate the amount of data they will be handling throughout its course.
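A paired read with adjudication can be sketched as a small decision rule. This is a deliberately simplified, hypothetical model: two independent readers assess each time point, and a third reader is consulted only when they disagree, which is also why the data volume per time point is two reads plus an adjudication for every discordant case:

```python
# Sketch of a "2+1" adjudicated read design (hypothetical simplification).

def final_assessment(read1, read2, adjudicate):
    """Resolve one time point: concordant reads stand as-is,
    discordant reads go to the adjudicator."""
    if read1 == read2:
        return read1                   # concordant: adjudicator never sees it
    return adjudicate(read1, read2)    # discordant: adjudicator decides

# Hypothetical adjudicator policy for this sketch: side with the more
# conservative of the two original calls.
conservative = lambda a, b: "SD" if "SD" in (a, b) else a

print(final_assessment("PR", "PR", conservative))  # PR (concordant)
print(final_assessment("PR", "SD", conservative))  # SD (adjudicated)
```

Tracking the discordance rate under such a rule also lets the data manager forecast adjudication volume for the rest of the trial.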
5. Visit the core lab
Although this may sound obvious, I encourage all data managers working on an outsourced clinical trial to establish a relationship with the vendor of your study. For clinical trials in which an imaging core lab is utilized for centralized image analysis, it is important to be involved in communications with the core lab from the start of the trial. Visiting the core lab and participating in conference calls between the sponsor and core lab are good ways to ensure open lines of communication during the course of the trial.
With medical imaging playing a prominent role in today’s clinical trials, data managers must be aware of the challenges associated with managing complex imaging data. The SCDM is an important conference and I’m looking forward to sharing my thoughts on this topic at the meeting. I hope to see you there!
Link to original post: http://www.bioclinica.com/blog/5-things-every-data-manager-should-know-when-it-comes-medical-images-clinical-trials
Recently I wrote a blog about the steps required to successfully select a clinical systems vendor that will truly deliver against your needs. To review: write a business case, collect detailed requirements, write a clear RFI, quickly narrow down your list of candidates, host some dog & pony shows, and finally make a decision. Whew! That’s a lot, and each step is a lot more difficult and involved than it sounds. But that’s the case with just about everything in life really, so I’m sure that being the smart and determined blog reader/system selector that you are, you managed to get through them all. Nice job!

By now however you probably realized that systems selection is just the tip of the iceberg. Now that your company has dropped a good chunk of change because you wrote that nice business case explaining why buying the system was such a great idea, you have to make sure that people actually, you know, use it.

Clinical systems implementation can be a tricky business. All the business cases in the world won’t help if the system doesn’t get rolled out properly, and even the most perfect of systems won’t be of any benefit if implemented in a vacuum.
Spare Some Change (Management)
Any time there are changes, there are going to be problems. People don’t like change. The lack of a change management strategy is one of the biggest reasons these shiny new systems fail to deliver on what they promise. In short, management needs to communicate the changes well before they start happening so people can prepare themselves. Really what needs to happen is essentially a marketing campaign – first teasing the upcoming changes (and the new system), then some ‘benefits management’ (letting people know what’s in it for them…you still have that business case, right? Time for a little copy/paste…), setting expectations for future shifts in the way things work, a big launch that’s given some degree of fanfare, and a follow up campaign to reinforce the new order of things (and to remind people how great everything is working out). All of this needs to be scaled to match with the impact that the new system will have on the organization – be careful not to overdo (or underdo) it!
Find the System a Home
While this is covered to a degree in the change management piece, it goes a lot deeper than that. First, you need to find the system a home – who is going to ‘own’ it? You also need to make sure that every layer of the organization that will be affected by the new system is addressed in a more ‘personal’ manner. Have conversations with individuals to get a better understanding of their concerns and, perhaps more importantly, get an idea of their expectations of the new system. Hopefully you’ve already addressed everything way back when you collected requirements, but things change and people come up with new ideas as they see for themselves more of what the system can do. Regardless, you need to make sure that what the people need (a good set of reports, for example) will be available from the system at launch, or you’re setting yourself up to fail.
At a higher level, some organizational changes may be necessary to get everything out of your new system that you want. For example, before the new system you may have had people doing manual data entry as half of their jobs. But the new system uses fully automated data feeds – what are you going to do with all of that free time now? Or the opposite might be true, where you need some kind of system administrator or business analyst permanently attached to the new system. Make sure you plan ahead!
Process is Paramount
A new system will almost always require a new set of processes. After all, the reason you’re getting a new system is that it’s better (and therefore different) than your old system. But it’s not just the actual interaction with the new system that will need to be looked at. This is a great time to step back and evaluate your overall processes in the area that the new system is a part of (and you’re already of a continuous improvement mindset anyway, right?). This is a great way to “market” your new system – because let’s face it, the old system wasn’t that bad. Usually the problems are at least 50% process oriented, if not more. By revamping your processes you can not only make them more efficient but they can be designed in such a way as to dovetail nicely with the new system. When you are issuing your communications throughout the system selection/implementation project, this becomes a much more powerful message than merely touting a new system. “Hey, we’re overhauling everything so your lives are about to get way better!”
Testing Users’ Patience
There’s nothing worse than rolling out a system that people find to be buggy and unstable. Even if the issues are minor, a bad first experience can leave a bad taste in people’s mouths for a long time (and perhaps forever). Doing thorough system testing (especially if your system has been configured or customized) is critical. User acceptance testing will also help to get a greater number of future users involved to not only spot technical issues but to identify general deficiencies in the system before it gets rolled out to the population at large. Testing is painful but necessary if you are to release a quality “product” to your organization.
Systematic System Training
Even the most simple of systems will require some training, and there’s nothing better than some hands-on sessions where users get their hands dirty. Just keep the extent of the training in line with the complexity of the system – don’t overcomplicate things. Further, training should be focused for each job role as there’s no reason that a person in one job needs to know every little thing a person in a completely different job does in the system (other than at the highest of levels). A training plan should be developed well before system roll out so everyone knows what to expect and blocks off time for training on their calendars. But training doesn’t end with the actual training – users will also need access to detailed documentation for refreshers, and creating other aids such as “cheat sheets” and FAQs can go a long way in making sure people can easily put your new system to good use.
By now you’re probably regretting purchasing this new system. Whose idea was this anyway? Take a deep breath and fear not, for with some planning and oversight (and a good fully dedicated project manager) your system can experience a smooth roll out and significant user adoption. Which means you did such a good job with this one that you’ll be the one put in charge of the next one…way to go!
Link to original post: http://www.pharmicaconsulting.com/clinical-systems-implementation-five-things-you-need-to-do/
If you have decided to establish a CTMS (Clinical Trial Management System) in your company, you might wonder about the necessity of having multiple installations or instances of this system.
More than one CTMS installation? Why is that?
Of course you will only ever work in one instance of your company’s CTMS, so that you have a single data basis for reporting and retrieving specific data. But there are other demands on the implementation process of a CTMS that create the need for more than one installation. Our proposal is to have three to four instances, which does not mean that you will need four times the hardware – thanks to virtualization technologies.
Installation No 1: Productive Environment
This installation is the productive environment: the system where all your staff work and enter real data. This system’s database is the source for all reporting.
Typically, this installation runs on the most powerful hardware of all four. Sometimes this system even sits behind a load-balancing server to provide additional performance.
Installation No 2: Test Environment
Never stop a running (CTMS) system! Every change, update, and bugfix has to be checked before it is implemented in the productive system (1) to avoid any problems. Therefore you should implement a dedicated test environment that is a copy of your productive environment.
Since this installation is not likely to be used by many users, it may be sufficient to run it on a virtual server.
Installation No 3: Training Environment
Having a technical solution is one part of the story; having your staff trained to use the system properly is the other. Regardless of whether you decide on classical classroom training, e-learning, or a blended learning scenario, you will need an installation of your CTMS that is set up specifically for training purposes.
This system has to be well prepared to provide different training scenarios.
Installation No 4: Validation Environment
Most CTMS installations are nowadays validated, which means that they comply with the regulations of 21 CFR Part 11. To pass the necessary validation process, it is a must to have a dedicated validation system that separates the entries made while working on test scripts from the productive trial data.
Of course you may have special conditions that make even more installations necessary, but the four CTMS installations mentioned above are a well-founded approach to a professional implementation of your Pharma-IT landscape.
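The four-environment setup described above can be summarized as a small configuration sketch. The field names and sizing notes below are illustrative assumptions, not a vendor specification:

```python
# Hypothetical summary of the four CTMS instances described above.
# Virtualization and user assignments are illustrative assumptions.
CTMS_ENVIRONMENTS = {
    "production": {"users": "all staff", "virtualized": False,
                   "purpose": "real trial data; source for all reporting"},
    "test":       {"users": "IT / few users", "virtualized": True,
                   "purpose": "verify updates and bugfixes before go-live"},
    "training":   {"users": "trainees", "virtualized": True,
                   "purpose": "prepared training scenarios"},
    "validation": {"users": "QA", "virtualized": True,
                   "purpose": "21 CFR Part 11 test-script execution"},
}

for name, env in CTMS_ENVIRONMENTS.items():
    print(f"{name}: {env['purpose']}")
```

Keeping such a map under version control is one lightweight way to document which environment serves which purpose as the landscape grows.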