“The one real goal of education is to leave a person asking questions.”
– Max Beerbohm, essayist of the late Victorian and Edwardian eras
Medical school is challenging. At first glance, the sheer volume of information can seem overwhelming, as if the only way to process it all is simply to memorize as much as possible. Critical thinking, then, can seem like an extraneous part of the process, something that risks crowding out otherwise needed brain space.
And yet, were this true, the quality of health care would stagnate.
What we know about diseases, therapies and overall patient care has been shaped by a culture of constantly re-evaluating the status quo. Research itself is a perfect example of this ethos: it is a process by which new answers are continually found through continued questioning. And while we have made great strides to arrive at today’s standards, we must keep challenging our own beliefs.
When I returned to Johns Hopkins for the medical school’s second-look weekend, I listened to many faculty members speak about our responsibilities as future physicians. One of the most striking remarks came from Glenn Treisman, director of the AIDS Psychiatry Service, who told us, “Fifty percent of what I teach you is going to be wrong, but I don’t know which 50 percent.”
That comment struck me. I had not even begun medical school, yet I was already being primed to question information that was given to me.
The first two years of medical school are largely classroom-based, and during that time, simply memorizing the standards of care takes enough energy on its own. Once we leave the classroom and enter the hospital, however, a new type of learning begins: the process of challenging what we have learned.
The teaching on the wards is very Socratic: We learn by answering questions. My most salient learning moments come when an attending physician first asks a question like, “What is the cutoff for starting this therapy?” Then, after we answer with the value we have memorized, the physician follows up with, “Is there evidence for that?”
Here is where the attending takes teaching to the next level. If there is evidence, we learn about the research behind the standard we just cited. If there is no evidence, we learn how that standard got adopted.
Before I began my basic internal medicine clerkship, we were given a talk about the standard hemoglobin A1c (HbA1c) cutoff for diagnosing type 2 diabetes (DM II). We had learned through our reading that a level of 6.5 percent or above was diagnostic of DM II. But in this lecture, we were presented with the evidence that led to this cutoff — or rather, the limitations of that evidence. This was eye-opening. Here was a national standard of care, and yet we were being challenged to understand why it came to be.
Last month, I listened to an outstanding conference presented by Thomas Finucane, professor of medicine at Johns Hopkins Bayview Medical Center, who addressed the data behind glucose control in DM II. The disease is diagnosed by abnormally high and uncontrolled blood glucose levels, and the diagnosis comes with an increased risk of morbidity and mortality. It stands to reason that controlling blood glucose would reduce that risk, and this reasoning likely led to our current care goals of lowering glucose and HbA1c. And yet in this presentation, I saw that of all the DM II medications, only one, metformin, has consistent evidence showing improved outcomes. While the remaining medications may lower glucose and HbA1c levels, they do not appear to similarly lower morbidity and mortality.
This is why we need data. And this is why we need to continue training young physicians to question. Medicine is not simply a game of memorizing. Critical thinking is, well, critical.