Tuesday, February 22, 2011
If the so-called thought leaders and powers that be in organizations such as the AMA and ACP supported (and continue to support?) P4P out of a belief that patient care would improve, they should now take a strong stand against such programs. P4P does not work.
Implicit in P4P programs is the concept of target goals. Goodhart's Law stands the test of time and logic: when a measure (as in a purported measure of "quality") becomes a target, it loses its value as a measure.
Monday, February 21, 2011
Dr. Scott Silverstein, writing on the blog "Health Care Renewal," discusses the sick-note incident in terms of the slippery slope that arises when there is "physician dishonesty-on-an-agenda," which he describes as the face of postmodern medicine.
Well, one way or another, someone is lying. If those claiming to be physicians are really not, they are obviously lying, and if they are physicians, they are lying about the purported sick time.
He suggests that this dishonest behavior ties in nicely with the New Medical Ethics as promulgated by the American College of Physicians, in which the notion of social justice is elevated to a prominent position.
Wednesday, February 16, 2011
Dr. Perednia provides a brilliant and detailed description of what is wrong with the current system/non-system and then offers his proposal to remedy the mess, one very similar to that offered by Dr. Richard Fogoros in his book, "Fixing American Healthcare".
Wait: why read about plans to overhaul American healthcare when we already have a solution in the form of the ACA? If you want to read a brief explanation of why the ACA is not the answer, go read Dr. John Goodman's latest comments on the incredible absurdity that Congress put together.
And congratulations to DrRich at the Covert Rationing Blog on his Weblog Award in the category of Health Policy and Ethics.
Friday, February 11, 2011
When I read about that finding, my first thought was to look more closely at the guidelines and, importantly, at the evidence underlying the recommendation. As has happened more than once, Dr. RW saved me the trouble. See here.
Dr. RW's analysis suggests that the evidentiary basis of the IDSA recommendation does not belong at the top of the classical, mythical evidence-based medicine (EBM) hierarchy, in which randomized clinical trials and meta-analyses perch.
My take on this article is that we might be cautious about accepting the findings at face value. After all, this was a retrospective observational study, replete with all the potential biases this type of study might possess. This is what I call coarse-grain data without the fine-grain detail that might be provided by detailed patient-level analysis. For example, the authors speculated that side effects of the double gram-negative antibiotic combination may have contributed to the increased mortality in the group treated in accord with the guidelines. Maybe so, but a more detailed analysis might support or refute that speculation.
Wednesday, February 09, 2011
One of the first things that comes to mind is exercise-associated hyponatremia (EAH).
EAH has attracted much attention in recent years. Dr. Tim Noakes, from Cape Town, South Africa, attributes the apparent increased incidence of the condition to an overemphasis on encouraging runners to drink liquids past the point of reasonable and safe short-term replacement needs. Subsequently, more physiologically reasonable recommendations regarding drinking during longer races have been issued. The New York Marathon's fluid replacement advice was 8 ounces every 20 minutes; the International Marathon Medical Directors Association (IMMDA) recommended 400-800 ml per hour. Too often in the past, the advice seemed to be to drink as much as possible. That advice seemingly led to some slower runners ingesting so much liquid that they actually gained weight during the event.
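For what it's worth, the two recommendations are roughly consistent. A quick back-of-the-envelope conversion (a sketch using only the figures quoted above and the standard ounce-to-milliliter factor):

```python
# Convert the New York Marathon advice (8 ounces every 20 minutes) to ml/hour
# and compare it against the 400-800 ml/hour range cited above.
ML_PER_US_FL_OZ = 29.5735        # standard conversion, 1 US fluid ounce

ny_rate = 8 * ML_PER_US_FL_OZ * (60 / 20)   # three 8-oz drinks per hour
low, high = 400, 800                        # cited ml/hour range

print(round(ny_rate))            # ~710 ml/hour
print(low <= ny_rate <= high)    # True: near the top of the cited range
```

So the New York advice works out to roughly 710 ml per hour, toward the upper end of the 400-800 ml/hour recommendation.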
Acute EAH has been associated with cerebral edema and non-cardiac pulmonary edema. With acute lowering of the serum sodium and less-than-instant re-equilibration of cerebral intracellular solutes, water moves into brain cells. If severe forms go untreated, cerebral herniation can occur with brainstem compression. Judicious amounts of three percent saline I.V. have become the consensus treatment.
Here is an earlier blog entry on putative mechanisms in EAH.
An elite runner collapsed and died early in the marathon trials in New York, and at least the early reports indicated no specific cause was determined. Exercise-associated hyponatremia was not a likely cause in this case. See here for comments regarding causes of sudden death in athletes.
Marathons in hot weather can be a disaster (the Houston weather was merely warm and humid), which is how some reporters described the ill-fated 2007 Chicago Marathon. See here.
Tuesday, February 08, 2011
The guidelines apparently were a joint effort of the cardiologists and practically everybody else with an interest in the diagnosis and the medical, surgical, or catheter-based treatment of vascular obstruction to the brain.
The paper gives a wealth of information and references and could easily take up many hours of study. Here is one snippet:
It is reasonable to prefer carotid endarterectomy (CEA) over stenting in asymptomatic patients with greater than 70% stenosis. The panel had grade A evidence for that recommendation.
Although they do not recommend screening for carotid obstruction in asymptomatic patients, many folks will be getting ultrasound exams of their necks and abdomens and Doppler exams for vascular disease of the lower extremities, as roaming proprietary groups are frequenting churches and other sites. So when a patient for whom you did not recommend screening shows up with a report suggesting significant blockage, you have a good resource to consult.
Thursday, February 03, 2011
In the 1 February 2011 issue of the Annals of Internal Medicine, in the Clinical Guideline section, the ACP's Clinical Guidelines Committee authored an article entitled:
"High-Value, Cost-Conscious Health Care: Concepts for Clinicians to Evaluate the Benefits, Harms, and Costs of Medical Interventions". See here for full text.
Dr. Douglas K. Owens, author of numerous cost-effectiveness studies, was the lead author.
The article begins with an expression of the customary alarm about increasing health care costs and the need for cost control, an effort the authors believe should focus on the value of health care interventions.
Their operational definition of value is "an assessment of the benefit of an intervention relative to expenditures". Value is determined by balancing benefit and costs.
This is consistent with Harvard Business School professor M.E. Porter's definition: health outcomes achieved per dollar spent.
Simple enough: we just figure out the benefits and the costs and... but the devil is in the details, as always.
The Annals authors then make what they believe to be a critical distinction: the distinction between cost and value. A high-cost item may or may not provide high value, and a low-cost item may offer little benefit, making that intervention low value. So what we want is high-value health care.
(As best I can tell, the buzzwordification of "high-value health care" can be attributed at least in part to the efforts of Porter and Dr. Elizabeth Teisberg, although I don't wish to slight Dr. Don Berwick and physicians at the ACP. Whatever its origins and vectors of spread, medical authors and policy wonks talk about it now as if everyone knows what it is.)
The authors then redefine rationing (or, in the authors' words, "more appropriately" define it) to mean "restricting the use of effective, high-value care". So if an intervention "determined" to be low value is restricted, that would not, by the new definition, be considered rationing. This should provide comfort to those who worry about the rationing of health care: eliminating an intervention that is determined (by whom?) to be of low value is not rationing at all. One can see what power this puts in the hands of those determining what is high and low value.
The authors then discuss the importance of considering the downstream costs and benefits of an intervention. For example, one has to factor in the cost of maintaining an ICD, not just the initial cost of assessing and placing the device.
If a treatment is both better and cheaper than an alternative there is no problem in deciding between the two. More complexity emerges when an alternative provides more benefits but also costs more.
In this situation, we are told, we need comparative effectiveness analysis, which is basically cost-benefit analysis (CBA) that compares the various alternative interventions. Conceding this point, at least for the sake of argument, one now asks who will make that analysis.
Owens et al provide the answer:
...we recommend assessing their value [competing interventions] to patients and society by using cost-effectiveness analysis. Such analyses require specialized expertise and training, are often expensive, and thus are typically performed by investigators.
Note that this type of assessment cannot be done by just anybody, only by those with specialized expertise, and note what they claim to provide: an assessment of value not only to patients but to society.
Some may find that level of hubris unsettling, but the real money quote of the article is this:
"The choice of a cost effectiveness threshold is itself a value judgment and depends on several factors, including who the decision maker is.
That is the heart of the matter. After all the gathering of various costs, the estimation of quality-adjusted life years (QALYs), the aggregation of costs and estimated benefits, and the application of various analytic tools (e.g., cost-effectiveness ratios), someone or some committee has to make a value judgment: is the benefit worth the cost or not? In the end, it is a human value judgment, not the solving of some equation. Then the question is who will decide.
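To make that point concrete, here is a minimal sketch of the arithmetic involved. All numbers are hypothetical, including the $50,000-per-QALY threshold (a commonly cited but arbitrary cutoff); the math produces a ratio, but the comparison against a chosen threshold is the value judgment:

```python
# Incremental cost-effectiveness ratio (ICER): the extra cost of intervention X
# over intervention Y, divided by the extra benefit in quality-adjusted
# life years (QALYs).
def icer(cost_x, qaly_x, cost_y, qaly_y):
    return (cost_x - cost_y) / (qaly_x - qaly_y)

# Hypothetical: X is more effective but also more expensive than Y.
ratio = icer(cost_x=80_000, qaly_x=6.0, cost_y=50_000, qaly_y=5.0)
print(ratio)  # 30000.0 dollars per QALY gained

# The "is it worth it?" step: someone must choose the threshold.
threshold = 50_000
print(ratio <= threshold)  # True under this threshold; a stingier one says no
```

Nothing in the function settles whether $30,000 per QALY is "worth it"; only the chosen threshold does, which is exactly where the human value judgment enters.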
In the same issue of the Annals of Internal Medicine, there is an editorial by Michael Gusmano and Daniel Callahan of the Hastings Center offering cautionary counterpoints.
They emphasize Owens and co-authors' admission that effectiveness evidence is lacking and that our ability to assess quality of life is inadequate. If the evidence is lacking and our ability to assess quality of life is inadequate, even investigators with expertise and special training might be challenged. Gusmano and Callahan continue:
Perhaps the biggest problem with cost-utility analysis is that expenditures on health care cannot be compared with other societal needs... The failure to consider opportunity costs may eliminate existing, but unassessed, health care technologies and services that are a better value than the "cost effective" technology included in these assessments.
More issues are raised: how far downstream should costs and benefits be assessed, what social discount rate should be chosen, and should there be an accounting for the costs of unrelated illnesses in those added life years? Further, the outcomes of the assessments can be very dependent on the various technical particulars of the process, most of which, for most readers, exist behind the curtain.
My take is that the Owens article is old wine in bottles with little if any redesign. In his 1992 JAMA series, Dr. David Eddy suggested that the quality of health care could be increased while decreasing costs if only we applied cost-effectiveness techniques, and that this would lead to the greatest good for the greatest number. This seems little different from eliminating only low-value care, the determination of which can only be made by experts with extra training. I commented on that series of JAMA articles here.
Finally, a hopefully unneeded clarification. Sarcasm aside, I actually favor comparative effectiveness research (CER); it has been going on for years. It is important to know whether the outcomes of intervention X are better than those of intervention Y, and we have known how to do that reasonably well for some time. However, when intervention X is better than Y but costs more, having someone presume to determine whether X is a better or lesser value for "patients and society" is another matter.
posted by james gaulte @ 12:57 PM
Wednesday, February 02, 2011
John Goodman asks the question "Does measuring quality actually decrease quality?". See here for his recent blog entry.
Charles Goodhart, a British economist put it this way in 1975:
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes
In other words, a measurement used as a target loses its value as a measure.
This basic notion was expressed at about the same time by a sociologist, Donald Campbell, who said:
"The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."
A poster child for this phenomenon, in the context of quality measures in medicine, is the absurd 4-hour pneumonia rule. I have blogged about that before.
When the incentive for ED staff was to get antibiotics to pneumonia patients within 4 hours, because that was established as a quality measure, distortion and corruption emerged: patients not suspected of having pneumonia received less prompt attention, and folks who really didn't have pneumonia were treated with antibiotics.
From Goodman's post:
Quality measures also degrade quality by distorting behavior.
Dr. Douglas Perednia had a great discussion of this topic here.