We can learn a lot from evidence-based medicine. Scientific studies have the potential to improve patients’ outcomes, but they can also push the clinician into an informational abyss. There are several possible pitfalls with studying scientific data: 1) the data can take preference over the patient’s clinical outcome; 2) the information can be valid but not pertinent to the patient’s situation; 3) the data may become important for its own sake, thus inspiring cheating or fudging of information; and 4) the information itself may be unreliable. Here are a few examples from my own professional career.
Clean data takes preference over patient outcome
When I was a medical student, we had a patient on the surgical unit who was dying of an advanced cancer. Due to cancer-related liver and renal failure he had anasarca, a condition in which patients can put on 20–50 pounds of water weight outside of their vascular system. It was apparent that the man was close to death. The residents following the case instructed C., a fellow medical student, to draw the patient’s blood in order to monitor his electrolytes. My colleague went to the man’s bedside, saw that he was close to death, and realized that with his profound edema she would probably have to stick him multiple times. She asked herself, “Is this really helping the patient?” She then decided to draw the blood from a friend of hers and sent it to the lab in lieu of taking the patient’s blood. One hour later the patient expired. When the results came back from the lab, the residents were quite excited that they had managed to correct this man’s electrolyte status, even though they had lost the patient. Of course, you could argue the ethics of my colleague’s actions for a few millennia. But at least she was thinking of the patient as a person rather than a set of numbers. The idea of the patient as an entity, not just a set of labs, is a good one and has now been embraced by end-of-life care communities. Thankfully, many patients at this juncture in their lives are nowadays in hospice care and not being managed by a team of over-eager residents or renegade medical students.
This same thought process is evident when patients complain to me: “I have reduced my calories and fat and I am exercising five times a week, but my weight hasn’t changed.” My aim as a doctor is not to impress your scale. My aim is to get you eating right and exercising. If you manage to incorporate healthy eating and exercise into your life, you have already won the battle. Many people’s scales show wonderful results when they go on fad diets or sit in a sauna for an hour without rehydrating themselves, but neither of these is advisable nor particularly healthful. This is when data gets in the way of logical thinking.
Information is valid but not pertinent
Over the years, I have been to at least three grand rounds in which the presenters showed data proving that if we were to remove the ovaries of every middle-aged woman who comes in for any GYN surgery, we would reduce deaths from ovarian cancer by tens of thousands. I do not doubt their findings. Ovarian cancer is a serious disease that will afflict 1 out of 70 women, and it is hard to detect at an early stage. This data, however, has no meaning unless you compare the lives of women who have had their ovaries removed with those who have not. As it turns out, when these studies were finally done, women who still had their ovaries lived longer, had less heart disease, less osteoporosis, and, in general, a better quality of life. In light of these newer studies (many of which are still in progress), we need to give women who are considering GYN surgery the whole story and let them decide which risks they are willing to take.
Clinical performance based on data inspires cheating
As a medical student working in the emergency room, I was assigned to one of three teams: blue, green, and red. The red team was overseen by Dr. M, a famous surgeon and the chief of emergency medical services. Like many successful surgeons of that era, he did not suffer fools gladly. Dr. M rounded on his patients every morning at 6:30 am. At 5:30 am the residents and medical students spent an hour putting red labels on patients who seemed to be improving and assigning the less-well patients to the blue and green teams. In this way, Dr. M’s team always outperformed the rest.
This same type of thinking creeps into all types of medical practices when performance is judged on limited data. Early in the 1980s, in-vitro fertilization specialists discovered that they had less success with women over the age of 40. Since their success was rated solely on full-term pregnancies, these physicians stopped women over the age of 39 from receiving their services. Once it was discovered that these women’s pregnancy rates improved with oocyte donation (the borrowing of a younger woman’s eggs), they were once again included in the statistics. Prior to this, if you compared a reproductive specialist who treated women with an average age of 34 to one whose patients averaged age 39, the doctor with the younger set of patients always outperformed the one with the older patient base. In the same way, when looking at success rates for cardiac surgeons, it is important to consider the surgeon’s patient base. If we do not look at the full picture, physicians being judged on just one parameter of success will merely drop high-risk patients from their purview. These patients, who may be older or have underlying medical conditions such as diabetes or hypertension, will have to forego cardiac surgery because their outcomes are less reliable and thus make the surgeon’s statistics worse. No one wants to have bad numbers!
Data is unreliable
During my residency, patients in active labor were placed on the Labor and Delivery Unit, and those with complications of pregnancy or who were postpartum were placed in the Maternity Unit. One Sunday, I was called by a nurse to come up to the Maternity Unit and evaluate a patient. The patient was full-term and had been admitted to the hospital with high blood pressure. The patient thought she was in labor. The nurse stated that she had placed the patient on a monitor for an hour and the patient was not having contractions. On my way from the elevator to the patient’s room, I could hear somebody breathing in a way that only a laboring patient breathes. When I got to the patient’s room, the baseline for her monitor was set at a hundred instead of zero. The monitor only registers readings from zero to a hundred. With a baseline of a hundred, nothing, not even the most gigantic contraction, will show up. On exam, the patient was ready to deliver. Two things are evident from this story. One is that your data is only as good as the collector of said data. The other is that you should never leave your common sense at home. A patient who says that she is in labor, breathes like she is in labor, and sounds like she is in labor should be taken seriously until proved otherwise by a competent clinician.
This reliance on hospital-collected data is how several radiology departments unwittingly over-radiated their patients. A recent New York Times exposé revealed that radiology technologists at many hospitals across the United States did not fully understand the data from their own scanning machines, and the data was less than helpful because the machines provided no alarm or visual alert when patients had received high doses of radiation. Once again, a clinician is only as good as the data that she collects.
I don’t want to give anyone the impression that medical studies are useless and medical technology is a waste of time. On the contrary, clinicians have immeasurably improved the lives of patients by using evidence-based data and modern technological tools. However, when applying these, some caveats apply. We physicians should always be skeptical about the information we receive and should always ask the following questions. Are the studies we ask patients to get and the procedures we put them through going to change how we manage their cases and ultimately affect their outcomes? Can we trust the information we get from the laboratories and radiologists we use? When reading studies, does the information gleaned from these studies give us the whole picture or does it just look at survival, for instance, and not evaluate quality of life? When looking at success rates of physicians or surgical techniques, are we comparing apples to oranges? And when something looks too good to be true, we need to wonder if the numbers have been manipulated to the benefit of the author at the cost of accuracy.
These days, many people receive medical information from the internet. This information may be about drugs, drug studies, physicians, or surgical procedures. We must all be vigilant that we understand how this data is collected, who is interpreting it and if it is even pertinent. It is easy to take a bad turn on the information highway.
This is such a good and subtle point. Numbers tell us so much, but it's also important to listen to other less quantifiable senses.
Posted by: Caroline Hagood | 09/20/2010 at 09:49 PM
Well said!
Posted by: Khebert.blogspot.com | 09/26/2010 at 09:17 AM
Thanks for your comments. I enjoyed reading your blog as well, and your pictures are great. My daughter is 24 and ironically also has had problems with hypoglycemia. At one point she fell down the stairs and then hit her head on a glass desk. It's much better now than it was when she was younger. Good luck!
Posted by: Dr. Judith Weinstock | 09/26/2010 at 05:02 PM