Sylvia Syvenky went for a routine dental appointment in early October, expecting to have two caps on her teeth replaced. But something went terribly wrong.

“I felt like I was choking,” Mrs. Syvenky said. “I couldn’t take a breath. All sorts of gurgly sounds were coming out of me.”

She was rushed by ambulance to University Hospital near her home in Edmonton, Alberta, where doctors placed a mask on her face and forced air into her lungs. They told her she had heart failure. After her condition improved, they asked her to sign up for a study of a new drug to help with breathing.

Mrs. Syvenky is like many people with heart failure who arrive at hospitals unable to breathe. Yet she is the last person who would normally be asked to join a research study: at age 70, she was much older than typical study participants, and her symptoms were too complex.

But now there is a growing movement to gather a new kind of evidence, the kind that will fill some of the biggest gaps in medical science: What treatment is best for typical patients with complex symptoms, like Mrs. Syvenky? Many are elderly, have several chronic conditions and take several unrelated medications. And what are the long-term effects of a treatment: death rates, side effects, progression of the disease?

A group of advocates, including medical researchers, medical societies and insurers, is lobbying Congress to pay for an Institute for Comparative Effectiveness Research that would assess treatments and identify gaps in the evidence. Where there are gaps, the institute would initiate what are being called “real world,” or “pragmatic,” clinical trials to gather the evidence.

Some leading researchers who used to defend the status quo say they have switched. “There has been a 180-degree turn” in thinking, said Dr. Eugene Braunwald, an eminent cardiologist at Harvard Medical School. “I personally have swung around.”

Although thousands of medical studies are completed every year, most have relatively limited goals.
They often carefully select patients who have few medical problems other than the one under study, making it easier to get one clear result. And they may not look at effects over the long term, assuming that if a treatment helps initially, patients will be better off.

But while such studies can help a drug win approval or answer a restricted research question, they can leave patients and doctors in the lurch, because they may not tell how the new drug or treatment will work once it is tried in real patients with complex problems. Such limited studies, while they can have value, may no longer be enough, particularly when care has become so expensive and real evidence more crucial. “They are at the heart of why we have trouble making decisions,” said Dr. Scott Ramsey, a professor of medicine at the University of Washington.

It is an issue that arises again and again. It is one reason, for example, for the debate over the popular diabetes drug Avandia, or rosiglitazone. When the drug was tested, the main question was whether it lowered blood sugar, which it did. Then, after it was on the market, some researchers found hints of increased risks for heart attacks, the major killer in diabetes. But there was no way to know for sure from the studies that led to the drug’s approval.

At the same time, a move to conduct many more pragmatic trials would involve nothing less than a rethinking of how medical research is financed and managed.

“There’s this gulf between what questions researchers have found interesting to study, what questions industry and the N.I.H. have chosen to fund, and what users of information most want to know,” said Dr. Sean Tunis, director of the Center for Medical Technology Policy, a nonprofit group that studies ways to get better medical evidence. “One starts from the head and the other starts from the tail and they don’t meet in the middle.”

Dr. Robert Califf, a cardiology professor at Duke University School of Medicine and principal investigator in the heart failure study, cites the study Mrs. Syvenky entered as a model of what is so urgently needed in medicine.

The study, the largest ever in heart failure, is 15 times larger than any previous study of its drug, nesiritide. Unlike those that led to the drug’s approval, it is enrolling patients like those doctors see every day: anyone showing up at one of 450 medical centers around the world, unable to breathe because of heart failure, is eligible. Participants are randomly assigned to receive an infusion of nesiritide or a placebo, a saltwater infusion. And the study, comparing the treatments, asks two simple questions: Are patients still alive a month later? And were they readmitted to the hospital?

Dr. Califf knows the evidence problem all too well. He spent years working on committees that formulate medical guidelines for treating heart disease patients. And over and over again, he says, he and the other committee members ran into a problem: the studies did not ask whether one treatment was better than another, and they did not ask what happened over long periods in typical patients with their complicated medical problems.

“We looked at the A.C.C. and A.H.A. guidelines,” Dr. Califf said, referring to the American College of Cardiology and the American Heart Association. “Fifteen percent of the guidelines were based on good clinical evidence. And cardiology is where we have the most evidence.”

He added that he was not indicting studies that looked at a more limited group of patients and often studied a drug’s effects for a shorter time. “You have to figure out the right dose. Is there a chance it could work?” Dr. Califf said. But something more is needed.

The Food and Drug Administration does not have a hard and fast rule about what it takes to show that a drug is effective, said Dr. Robert Temple, director for medical policy at the F.D.A.’s Center for Drug Evaluation and Research. A lot depends on what is known about the drug’s short-term effects and how well they predict long-term outcomes. But, he added, there are practical concerns with large pragmatic trials, because companies have to look at a wide range of possible effects when they test a drug. “If you do a large outcome study in 10,000 people in the same way you do short-term studies, you’ll never finish,” Dr. Temple said.

“There’s no white hat, black hat here,” said Dr. Kevin Weiss, president and chief executive of the American Board of Medical Specialties. “Pharmaceutical companies are trying to do what they are supposed to do. The F.D.A. is trying to do what it is supposed to do. But they are not fully connected to what the public needs.”

That was part of the problem with nesiritide. At first, all was well. The drug dilates blood vessels, making it easier for the heart to pump blood into the rest of the body, and patients breathed better. The F.D.A. approved the drug in 2001 based on studies that asked about breathing in the first few hours and excluded patients with symptoms as complex as Mrs. Syvenky’s, even though she is typical of half of all people with heart failure. The patients in the original studies, mostly white men, had an average age of 60. Yet more than 800,000 Americans aged 65 and older were hospitalized for heart failure in 2006, the most recent year for which statistics are available.

In 2005, questions arose. Researchers lumped together data from several nesiritide studies; one analysis reported damage to kidney function, and another found increased death rates. Sales plummeted. But no single study was large enough to determine whether those risks were real, and merging smaller studies in a so-called meta-analysis can be misleading.

In fact, said Dr. Adrian Hernandez, a cardiologist at Duke University, meta-analyses have been a risky business.
When their conclusions have been tested in subsequent studies, they have been correct just 60 percent of the time. They are good for generating hypotheses, or perhaps for when clinical trials are impractical. But as evidence? They are about as accurate as tossing a coin.

With fears about the drug growing, Johnson & Johnson, the drug’s maker, asked Dr. Braunwald to put together an expert panel to advise it. The questions about nesiritide were so pressing, Dr. Braunwald’s panel concluded, that the drug should be given only to the sickest patients, in a hospital setting. In the meantime, the company needed to conduct a large pragmatic trial looking at clinical outcomes in typical patients.

“The data on which the drug was approved were very sketchy,” Dr. Braunwald said in a recent interview. “And since the question had been raised by these two meta-analyses, which in themselves were controversial, the idea of a pragmatic, outcomes-based clinical trial was very natural.”

Dr. Steven Goodman, an oncologist and biostatistician at Johns Hopkins University School of Medicine, wants to insert a reality check on large pragmatic clinical trials. “When they are first described, they sound wonderful,” he said. But, he added, there’s a rub: “You often give up the ‘why.’ ”

Pragmatic trials, he explains, are most feasible when they are as simple as possible, measuring important outcomes like hospitalizations or deaths but not things like how much medication is taken, how well a procedure is performed or how accurately an X-ray is read. An operation, for example, may not work well in the real world because it takes more skill and training than is typically found outside a few medical centers. A pragmatic trial will show that the surgery is not working, but not why. And scientists, Dr. Goodman added, do not like giving up on the why.

That leads to the question of who is going to pay for these studies. Medicare pays for medical care but does not sponsor studies. Insurance companies, said Dr. Goodman, who helps review evidence for Blue Cross Blue Shield, may be seen as having a conflict if they sponsor studies, because they may have to pay for treatments that are shown to be effective.

Drug companies sometimes do pragmatic studies, said Alan Goldhammer, the vice president for regulatory affairs at PhRMA, a trade group for drug companies. But usually that is when “there are issues relating to the drug and the ability to affect drug and marketplace.”

At the National Institutes of Health, said Dr. Elizabeth Nabel, director of the National Heart, Lung and Blood Institute, “many of us would love to do many more of these studies.” But, she added, “we have a limited budget and there is only so much that we can do.”

The nesiritide study was a direct result of the recommendation by Dr. Braunwald’s panel. Johnson & Johnson is paying for it, but the study’s overall conduct, design and analysis are coordinated at Duke University through an academic consortium and led by an independent academic executive and steering committee.

When the study began, some heart specialists said it could never enroll enough patients: who would agree to be randomly assigned to a placebo or a drug to ease breathing? So far, however, recruitment is ahead of schedule, Dr. Hernandez said, which he attributes to the researchers’ enthusiasm. And, he adds, there are already more patients from North America in this study than in any acute heart failure study ever done.
2012 BioPharmaLife