"Comparative effectiveness" studies, which compare one treatment for a particular illness against another to determine which works better, have received a lot of attention and billions of dollars in federal support in the last few years. But when I mentioned comparative effectiveness research recently to a colleague who I know is particularly interested in treatments and the clinical trials behind them, he let out a loud snort and guffaw before I even finished saying the words.
"It's a great idea, but it's not real life," he said, regaining his composure. "Or at least not the real life of a lot of doctors and patients."
To explain, he described a newly published study on treating children who are thought to have swallowed a small foreign object, like a coin or a toy. He reeled off a long list of laboratory tests, scans, scopes and X-rays that the researchers recommended for such cases, adding, "Those experts assume that everyone lives near big medical centers like theirs, but not all of my patients do. And what are we going to do if the insurance company doesn't approve of all the tests we order?"
It took only a cursory review of comparative effectiveness research over the last decade for me to realize that my colleague was right. Despite producing many useful and even potentially cost-saving findings, many of these studies failed to change doctors' practice and patient care.
Now a group of researchers has offered a cogent analysis in the journal Health Affairs to explain this failure. And they have done so using methods about as different from comparative effectiveness as research can be.
The researchers talked to more than 50 doctors, patient advocates and other health care experts, each of whom had created, conducted or evaluated comparative effectiveness research or helped introduce the findings into clinical practice. During the interviews, they referred to trials of blood pressure medications, spine surgery, antipsychotic drugs, a heart rhythm device, heart catheterizations and bone marrow transplantation, and then asked why some of these studies seemed to inspire enduring changes while others did not.
A handful of factors came up again and again. Those interviewed frequently noted that many of these studies did not address the actual needs of practicing clinicians and patients. For instance, one study of medications for treating psychosis focused on the differences in efficacy among the drugs, but mental health care providers really wanted to know about differences in safety.
Sometimes, too, a study?s conclusions required such a significant shift in thinking that doctors and patients had difficulty adjusting to the change, like the recent recommendations against using measures of the prostate-specific antigen, or PSA, as a screening test for cancer. Other times, the findings were so nuanced or ambiguous, with such complicated restrictions on what worked best when, that they simply were not incorporated into professional guidelines or recommendations.
But perhaps the most common reason for these studies' failures came down to dollars. In the current health care system, clinicians are rewarded for doing and ordering more. Pharmaceutical and medical device firms reap fortunes from physicians' orders, and a single change could cost them billions. Studies that endorse anything less than an expensive procedure, a newer and costlier medication, or the latest device are often destined for failure, or for a protracted struggle against drug and device companies willing to put up a costly fight.
"The incentives are all out of whack," said Justin W. Timbie, the lead author and a health policy researcher at the RAND Corporation in Arlington, Va. "The current system favors treatments that are well paid, not necessarily those that are most effective."
For example, one study found that generic diuretic pills that cost pennies a day worked better for patients with high blood pressure than newer drugs that could be as much as 20 times as expensive. Because hypertension affects tens of millions of Americans, this finding had the potential to save the health care system billions of dollars.
But the finding never really took hold; the percentage of patients taking the cheaper diuretics barely increased. Physicians had a difficult time changing their prescribing habits; limited funding prevented researchers from widely disseminating the results; and pharmaceutical companies waged an aggressive marketing campaign that included paying health care experts to speak about the study in a way that made their expensive drugs seem better.
Based on their findings from these interviews, Dr. Timbie and his fellow investigators offer several suggestions that may improve the impact of these studies. These include realigning financial incentives to support recommended changes in practice; incorporating a broad range of perspectives, like those of practicing doctors and patients, in the design, goals and interpretation of such studies; and, above all, proceeding with a clear strategy for all future comparative effectiveness research.
Despite the challenges, the researchers remain optimistic about the future. And for good reason. Their study was initiated by policy makers and financed by the federal office responsible for health care policy coordination and planning. And representatives from the new national organization, the Patient-Centered Outcomes Research Institute, whose mission is to develop and oversee such studies, have reviewed and discussed the suggestions with Dr. Timbie and the other authors of the study.
"The track record to now has not been great, but for the first time, comparative effectiveness research is a priority for the country," Dr. Timbie said. "The whole process of generating new evidence has a degree of governance that has never existed before."
"It's all about impact now," he added.
Source: http://well.blogs.nytimes.com/2012/11/08/why-studies-that-compare-treatments-lack-impact/