Researchers Fight Over Chantix and Heart Risks

May 14th, 2012 // 1:08 pm

A spat has broken out among researchers over the extent to which the controversial Chantix quit-smoking pill is linked to cardiovascular risks. A meta-analysis that was published last week in BMJ found the drug, which is sold by Pfizer, does not increase the risk of heart attacks and strokes. The results contrast with a meta-analysis published last year that maintained Chantix does increase heart risks, a conclusion the authors of the latest study labeled “misleading.”

The dispute centers on differing approaches to conducting a meta-analysis, which, of course, combines the results of several studies to test a hypothesis. Beyond disagreements over methodology, however, the clash is certain to extend the wider safety debate over Chantix, which has also been blamed for suicidal and hostile behaviors, although the FDA maintains that studies show the benefits outweigh the risks (see here and here).

Safety concerns, you may recall, have plagued Chantix almost since the pill was approved in 2006 and, consequently, have frustrated Pfizer, which had high hopes the drug would generate impressive sales. Instead, a stream of media stories about psychiatric and cardiovascular side effects has dampened expectations. In the first quarter of this year, Chantix generated just $178 million, a 10.5 percent drop from a year earlier (see page 36 here).

The cardiovascular concerns were highlighted last year when the FDA added a warning to the product labeling about an association with a small but increased risk of cardiovascular adverse events in patients with cardiovascular disease (read here). But the issue accelerated when a meta-analysis published in the Canadian Medical Association Journal found Chantix was associated with a 72 percent increased risk of serious adverse cardiovascular events in smokers without a history of heart disease (look here).

The results of that study, which examined 8,216 patients in 14 trials, prompted researchers at the University of California, San Francisco, to conduct their own meta-analysis. In their view, the numbers simply did not add up. And when they finished reviewing 22 trials with 9,232 patients, they found the difference in risk of serious cardiovascular events between those on Chantix and those on a placebo was 0.27 percent, which was neither clinically nor statistically significant (here is the BMJ study).

“What caught my eye was that, in their very first study, they found one (CV) event in 400 people and the placebo was 0 out of 200,” says lead author Judith Prochaska, an associate psychiatry professor at the University of California, San Francisco, and a researcher with the Center for Tobacco Control Research and Education. She notes she has also won an “investigator-initiated research award” from Pfizer that has been applied to this Chantix trial.

“There were twice as many subjects on Chantix and there was a difference of only one event. The calculated statistic for the study was 4.5, or 4.5 times greater risk, and that’s alarming. But the statistic they were using was unstable and inflates findings under certain conditions. And there were eight studies (in the CMAJ analysis) that were like that,” she says, adding that there were other methodological issues that account for the different findings.
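For readers who want to see where a figure like 4.5 can come from, here is a minimal sketch. The article does not name the statistic, but assuming it is the Peto odds ratio commonly used in rare-event meta-analyses, a single 2x2 table with one event among 400 patients on the drug and none among 200 on placebo reproduces a point estimate of roughly 4.5:

```python
from math import exp

def peto_odds_ratio(events_trt, n_trt, events_ctl, n_ctl):
    """Peto one-step odds ratio for a single 2x2 table."""
    n_total = n_trt + n_ctl
    events_total = events_trt + events_ctl
    # Observed minus expected events in the treatment arm under the
    # hypergeometric null, and the variance of that quantity.
    observed = events_trt
    expected = events_total * n_trt / n_total
    variance = (n_trt * n_ctl * events_total * (n_total - events_total)
                / (n_total ** 2 * (n_total - 1)))
    return exp((observed - expected) / variance)

# One event among 400 on the drug vs. none among 200 on placebo,
# the example Prochaska cites:
print(round(peto_odds_ratio(1, 400, 0, 200), 2))  # 4.48, i.e. roughly 4.5
```

With only one event in the entire table, the estimate rests on almost no information, which is the instability Prochaska describes.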

For instance, the CMAJ analysis excluded eight trials, with nearly 1,600 tobacco users randomized to Chantix or placebo, in which no serious cardiovascular events occurred. And the CMAJ analysis examined events that occurred over the complete length of the Chantix trials, some of which lasted up to a year. Prochaska examined events only up to 30 days after a patient stopped using the pill, on the belief that the drug remains in the body for seven days after usage ends. And 13 of the 14 studies in the CMAJ analysis experienced greater attrition in the placebo group than in the treatment group, which she says could inflate the treatment effect.
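To illustrate why the handling of zero-event trials matters, here is a brief sketch using hypothetical counts (not the actual trial data). A trial with no events in either arm is dropped from a relative-scale (odds ratio) pooling because its ratio is undefined, but it still contributes a risk difference of zero and so pulls a pooled risk difference toward the null:

```python
def pooled_risk_difference(trials):
    """Sample-size-weighted pooled risk difference; a simplified
    stand-in for the inverse-variance weighting a real meta-analysis
    would use."""
    total_n = sum(n_trt + n_ctl for _, n_trt, _, n_ctl in trials)
    return sum(
        (n_trt + n_ctl) / total_n * (e_trt / n_trt - e_ctl / n_ctl)
        for e_trt, n_trt, e_ctl, n_ctl in trials
    )

# Hypothetical counts, purely for illustration:
with_events = [(3, 350, 1, 175)]                # events in both arms
plus_zero = with_events + [(0, 200, 0, 100)]    # add a zero-event trial

print(pooled_risk_difference(with_events))  # ~0.0029
print(pooled_risk_difference(plus_zero))    # ~0.0018, pulled toward zero
```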

The lead author of the CMAJ meta-analysis, however, contends the latest meta-analysis is off base. “I think there should be scientific debate about the best methodology,” Sonal Singh, an assistant professor of medicine at Johns Hopkins University, tells us. “I don’t think their study is misleading. There are just two different approaches to looking at the data. But I’m disappointed in the tone. But they needed media attention” (here is the UCSF press release that uses the word ‘misleading’ to describe his meta-analysis).

Singh noted that his meta-analysis examined patients through the duration of the Chantix trials because what remains unknown is the length of time after treatment ends during which a heart risk may appear. “We were learning from the Vioxx issues. With Vioxx, heart risks didn’t climb until long after people were taken off the drug. She assumes the potential CV risk is due to a direct effect of the drug being in the body, so when the drug is out the risk should go away. I’m making a very different assumption. I don’t know how the drug is increasing heart risk, so I’ll count data through the end of a study and some lasted up to a year,” he says. “Which world do you live in? She’s assuming she knows how the drug causes CV risks and I’m assuming I don’t know. And those are the kinds of studies that are needed.”

In a response to BMJ, Singh and a colleague who worked on the CMAJ meta-analysis, Yoon Loke of the University of East Anglia in the UK, write to the journal that the review conducted by the UCSF researchers “has limitations in data, analysis and interpretation which raises doubts about the veracity of their conclusions.” And they cite these points…

Prochaska and her colleague excluded a number of reported cardiovascular adverse events from the Chantix arms of the clinical trials. They also say the data were analyzed at the treatment level, which would allow exclusion of events occurring in randomized patients. “In contrast, we adhered to the intention-to-treat (ITT) analysis in accordance with FDA regulations and established and generally accepted scientific principles,” they write.

From there, they say the UCSF researchers “recommend risk difference as the most appropriate model.” However, they maintain that “most regulatory meta-analysis of safety risks, including our meta-analysis, are conducted on the relative scale.” And they claim the models used by Prochaska “are statistically underpowered at low event rates, and bias their estimates towards the null” hypothesis. Finally, they say there was a failure to provide information on the optimal information size or the power of the meta-analysis (here is their letter to BMJ).
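The “optimal information size” point is, in essence, a power calculation. As a rough and purely hypothetical illustration, assuming a 0.5 percent serious-event rate on placebo and the 72 percent relative increase reported in the CMAJ analysis, a standard two-proportion sample-size formula suggests on the order of 8,000 patients per arm would be needed for 80 percent power, far more than the roughly 9,232 patients across both arms in the BMJ review:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p_ctl, p_trt, alpha=0.05, power=0.80):
    """Approximate per-group sample size to detect a difference
    between two proportions with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_ctl * (1 - p_ctl) + p_trt * (1 - p_trt)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_ctl - p_trt) ** 2)

# Illustrative assumptions only, not the trials' actual event rates:
p_placebo = 0.005          # assumed 0.5% serious-event rate on placebo
p_chantix = p_placebo * 1.72  # the 72% relative increase from the CMAJ paper
print(n_per_group(p_placebo, p_chantix))  # roughly 8,200 per arm
```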

“There is a simple explanation why this study could not detect a difference in cardiovascular risk. Because of a weak design it was unable to detect any effect on anything,” Thomas Moore, a senior scientist with the Institute for Safe Medication Practices who serves as a consulting expert in the civil litigation regarding Chantix, writes us in an e-mail. “It would be a serious scientific error to make a safety claim based on a study that did not disprove the null hypothesis.” He also co-authored a study with Singh that concluded Chantix is not suitable for first-line use (see this).

As for Prochaska, she dismisses the criticism as well as concerns that her findings were colored by her support from Pfizer. She maintains that she began the meta-analysis before starting the Pfizer-sponsored research, which takes place in a hospital setting, because she did not want to put patients at risk. “It’s unfortunate that anyone would throw away all the math we’ve done” for that reason, she says.

“This is the first time I’ve ever taken Pfizer funds. And this study is an idea I put forward. I have five other grants funded by NIH and the state of California. I don’t feel any need that I have to take money from pharma. There are a lot of top scientists who do partnerships with pharma. It’s not always sinister. And Pfizer had no oversight of this. There was no prior publication review and it was a completely independent analysis. It comes down to the math. We used the same method as Singh to identify studies and code events, and this shows in a very straightforward way why the statistics they chose were inappropriate.”

