Monday, November 17, 2014

Should resident promotion decisions be based on a written exam?

A few days ago, some surgeons on Twitter discussed the role of the American Board of Surgery In-Training Examination, a test given every January.

The test was designed to assess residents' knowledge and give them an idea of where their studying should be focused. However, many general surgery program directors (PDs) use the test results in other ways. Some impose remediation programs on residents with low scores and even base resident promotion or retention on them. Some even demand that all residents in their programs maintain scores above the 50th percentile.

The Residency Review Committee (RRC) for Surgery frowns upon these practices and states in its program requirements (Section V.A.2.e) that residents' knowledge should be monitored "by use of a formal exam such as the American Board of Surgery In Training Examination (ABSITE) or other cognitive exams. Test results should not be the sole criterion of resident knowledge, and should not be used as the sole criterion for promotion to a subsequent PG [postgraduate year] level."

The problem for program directors is that the RRC also mandates (Section V.C.2.c) that "as one measure of evaluating program effectiveness" 65% of a residency program's graduates must pass both the American Board of Surgery's Qualifying Examination (written) and Certifying Examination (oral) on their first attempts. I have said before that the "65% on the first attempt rule" does not seem evidence-based.

Wednesday, November 12, 2014

Can cholecystectomies safely be done at night?

A new study from surgeons at UCLA found that laparoscopic cholecystectomies done at night for acute cholecystitis have a significantly higher rate of conversion to open than those done during daylight hours.

Nighttime cholecystectomies were converted 11% of the time vs. only 6% for daytime operations, p = 0.008, but there was no difference in the rates of complications or hospital lengths of stay.

The study, published online in the American Journal of Surgery, was a retrospective review of 1140 acute cholecystitis patients, 223 of whom underwent surgery at night.
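As a back-of-the-envelope check, the reported figures can be plugged into a standard two-proportion z-test. The conversion counts below (about 25 of 223 and 55 of 917) are my reconstruction from the reported percentages, not numbers taken from the paper, and the normal approximation is only a sketch, but it lands in the neighborhood of the reported p = 0.008:

```python
from math import sqrt, erfc

# Reported figures: 1140 patients, 223 operated on at night;
# conversion to open in 11% of nighttime vs. 6% of daytime cases.
n_night, n_day = 223, 1140 - 223
conv_night = round(0.11 * n_night)   # ~25 nighttime conversions (back-calculated)
conv_day = round(0.06 * n_day)       # ~55 daytime conversions (back-calculated)

p1, p2 = conv_night / n_night, conv_day / n_day
pooled = (conv_night + conv_day) / (n_night + n_day)
se = sqrt(pooled * (1 - pooled) * (1 / n_night + 1 / n_day))
z = (p1 - p2) / se
p_value = erfc(abs(z) / sqrt(2))     # two-sided p from the normal approximation

print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```

The exact p-value depends on the true counts and on whether the authors used a chi-square or Fisher's exact test, but the difference is clearly statistically significant either way.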

The authors advocate delaying surgery until it can be done in the daytime, but this conclusion needs to be examined.

Although the percentage of gangrenous gallbladders was similar in both groups, it wasn't clear from the data how many patients were semi-elective and how many were true emergencies.

Operative times were 110.5 minutes for nighttime cases and 92.4 minutes for daytime cases, and patients waited 1.5 days (night group) vs. 2.0 days (day group) before being taken to the operating room, both p < 0.0001. Hospital lengths of stay were similar: 3.7 days for the night group and 3.8 days for the day group. The reasons for these lengthy operations, delays in operating, and long hospital stays were not explained in the manuscript.

Wednesday, November 5, 2014

Proctoring, supervising, and coaching

Any surgeon who acts as a proctor for another surgeon or supervises residents or mid-level providers should be aware of the potential legal pitfalls.

An informative discussion of proctoring and supervision called "Is There a Proctor in the House?" appeared in 2012 on a website called Law Journal Newsletters.

Proctoring is not a new issue. For many years, surgeons have been assigned to proctor newly appointed staff in order to confirm that they were properly trained. Proctoring has since been extended to surgeons learning new techniques in minimally invasive and robotic surgery.

The usual scenario is that a proctor is assigned by a hospital's department chair or credentials committee with the expectation that the proctor will observe and report on the new individual's skills.

According to the article, "a surgical proctor who acts only as an observer should not have any medical malpractice liability if a procedure is performed below the standard of care." This holds true as long as the proctor has no physician-patient relationship and does not participate in any medical decision-making or scrub in on the procedure.

Thursday, October 30, 2014

How to rank surgical residency programs

In September, Doximity, a closed online community of over 300,000 physicians, released its ratings of residency programs in nearly every specialty. Many, including me, took issue with the methodology. Emergency medicine societies met with Doximity's co-founder over the issue and echoed some of the comments I had made about the lack of objectivity and emphasis on reputation.

I wonder if it is even possible to develop a set of valid criteria to rate residency programs. Every one I can think of is open to question. Let's take a look at some of them.

Reputation is an unavoidable component of any rating system. Unfortunately, it is rarely based on firsthand knowledge, because there is no way for anyone not directly involved with a program to assess its quality. Reputation is built on history, but all programs experience turnover of chairs and faculty. Just as in sports, maintaining a dynasty over many years is difficult. Deciding how much weight to give reputation is also problematic.

The schools that residents come from might be indicative of a program's quality, but university-based residencies tend to attract applicants from better medical schools. And who is to say which schools are the best?

Faculty and resident research is easy to measure but may be irrelevant when trying to answer the question of which programs produce the best clinical surgeons. Since professors tend to move from place to place, the current faculty may not be around for the entire 5 years of a surgery resident's training.

The number of residents who obtain subspecialty fellowships, and where those fellowships are, might be worthwhile, but it would penalize programs that attract exceptional candidates who are happy to become mere general surgeons.

Resident case loads, including volume and breadth of experience, would be very useful. However, these numbers have to be self-reported by programs, and self-reported data are often unreliable. Here are some examples of why.

For several years, M.D. Anderson has been number one on the list of cancer hospitals as compiled by US News. It turns out that for 7 of those years, the hospital was counting all patients admitted through its emergency department as transfers, thereby excluding them from mortality figures. This excluded 40% of M.D. Anderson's admissions, many of whom were likely its sickest patients.

The number and types of cases done by residents in a program have always been self-reported. The Residency Review Committee for Surgery and The American Board of Surgery have no way of independently verifying the number of cases done by residents, the level of resident participation in any specific case, or whether the minimum numbers for certain complex cases have truly been met.

So where does that leave us?

I'm not sure. I am interested in hearing what you have to say about how residency programs can be ranked.

Friday, October 24, 2014

Please stop this: "There are more ___ than Ebola victims in the US"

I get it. Can we please stop comparing the number of Ebola victims in the United States to all sorts of irrelevant things? PS: It's not that funny either.

The following are directly copied from recent tweets. Links have been removed for your protection.

There are more Saudi Princes than Ebola victims

Kim Kardashian has had more husbands than Ebola victims in the US

More Americans have been dumped by Taylor Swift than have died from Ebola

Fun Fact: More #kids die annually due to #faith healing than #Ebola.

FACT: Katie Price has claimed more victims than Ebola.

NYC traffic. another thing that's much more dangerous than #Ebola, courtesy of @bobkolker via @intelligencer

There are more people in this tram than ebola victims in America.

I've lost more followers than US Ebola victims [I didn't tweet this or any of these other tweets.]

@lbftaylor fewer #ebola victims in US than drunk Palins in a #PalinBrawl.

@pbolt @robertjbennett Also, there are more ex-wives of Larry King than there are ebola victims int he US.

Rush Limbaugh has more ex-wives than USA has Ebola victims!

@xeni Menudo has had more members than 3x the number of American Ebola victims...

Put #ebola in the context of vaccination preventable dz: 118,000 children < 5 yrs old die from measles per year

@Tiffuhkneexoxo @LeeTRBL more dc team quarterbacks have played this year than there are US ebola victims

Rest assured, there will always be more American guns in Africa than Ebola victims. Everything is fine. Relax

As #Enterovirus spreads faster x country & kills more than #Ebola, sure victims' parents must b sad congress isn't demanding an ED68 czar.

We are all far more likely 2 be victims of identity theft than #Ebola. Obama has a plan to fix that

Americans spend more money on Halloween costumes for their pets than the UN spends on helping Ebola victims and fighting ISIS combined.

@mikebarnicle 9900 gunshot victims since Newtown, much scarier than Ebola.

So FYI... More people die from the #flu than #ebola .

Fear hospital infections not Ebola. 1 in 25 patients are infected. 75,000 die yearly.

Every day in America around 100 people lose their lives to mostly preventable car crashes. #Ebola

There are more experts on CNN right now talking about Ebola in America than people with ebola in America.

Wednesday, October 22, 2014

1 in 20 Americans are misdiagnosed every year


A paper published in April found that about 12 million Americans, or 5% of adults in this country, are being misdiagnosed every year. This news exploded all over Twitter. Anxious reports from media outlets such as NBC News, CBS News, the Boston Globe, and others fanned the flames.

The paper involves a fair amount of extrapolation and estimation reminiscent of the "440,000 deaths per year caused by medical error" study from last year.

Data from the authors' prior published works involving 81,000 patients and 212,000 doctor visits yielded about 1600 records for analysis.

A misdiagnosis was determined by either an unplanned hospitalization (trigger 1) or a primary care physician revisit within 14 days of an index visit (trigger 2).

A quote from the paper [emphasis added]: "For trigger 1, 141 errors were found in 674 visits reviewed, yielding an error rate of 20.9%. Extrapolating to all 1086 trigger 1 visits yielded an estimate of 227.2 errors. For trigger 2, 36 errors were found in 669 visits reviewed, yielding an error rate of 5.4%. Extrapolating to all 14,777 trigger 2 visits yielded an estimate of 795.2 errors. Finally, for the control visits, 13 errors were found in 614 visits reviewed, yielding an error rate of 2.1%. Extrapolating to all 193,810 control visits yielded an estimate of 4,103.5 errors. Thus, we estimated that 5126 errors would have occurred across the three groups. We then divided this figure by the number of unique primary care patients in the initial cohort (81,483) and arrived at an estimated error rate of 6.29%. Because approximately 80.5% of US adults seek outpatient care annually, the same rate when applied to all US adults gives an estimate of 5.06%."
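The chain of extrapolations in that passage is easy to reproduce. This sketch uses only the figures quoted above and shows how three small chart reviews become an estimate covering every US adult:

```python
# Figures quoted from the paper: (errors found, charts reviewed, total visits)
groups = {
    "trigger 1": (141, 674, 1086),
    "trigger 2": (36, 669, 14777),
    "control":   (13, 614, 193810),
}

# Extrapolate each group's observed error rate to all of its visits, then sum
estimated_errors = sum(found / reviewed * total
                       for found, reviewed, total in groups.values())

patients = 81483                             # unique patients in the cohort
rate_in_cohort = estimated_errors / patients # the paper's 6.29%
rate_all_adults = rate_in_cohort * 0.805     # 80.5% of US adults seek care

print(f"{estimated_errors:.0f} estimated errors "
      f"-> {rate_in_cohort:.2%} of the cohort "
      f"-> {rate_all_adults:.2%} of all US adults")
```

Note how much work the extrapolation is doing: only 190 actual errors were found, yet the headline figure rests on multiplying error rates from samples of a few hundred charts across nearly 210,000 visits.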

Thursday, October 16, 2014

Lactated ringers and hyperkalemia: A blog post meriting academic credit

In a recent post, I suggested that physicians should receive academic recognition for certain social media activities. "Myth-busting: Lactated Ringers is safe in hyperkalemia, and is superior to NS," written by Dr. Josh Farkas (@PulmCrit), is a great example of why that is true.

Using only about 1250 words and 6 references, he explains that infusing lactated ringers not only does not cause harm but is actually superior to normal saline in patients with hyperkalemia, metabolic acidosis, and renal failure.

I highly recommend reading the post which should take you only a few minutes. If you're too lazy to do that, here's a summary.

Dr. Farkas found no evidence that lactated ringers causes or worsens hyperkalemia. In fact, he presents some solid evidence to the contrary.

If the serum potassium is 6 mEq/L, a liter of lactated ringers, which contains 4 mEq/L of potassium, will actually lower the potassium level.
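A rough dilution calculation illustrates why. The ~14 L extracellular fluid volume below is my assumption for an average adult, not a figure from Dr. Farkas's post, and the sketch deliberately ignores redistribution and excretion:

```python
ecf_volume = 14.0   # liters of extracellular fluid (assumed, average adult)
serum_k = 6.0       # mEq/L, the hyperkalemic patient's potassium
lr_volume = 1.0     # liters of lactated ringers infused
lr_k = 4.0          # mEq/L of potassium in lactated ringers

# Total potassium after the infusion, spread over the larger fluid volume
new_k = (ecf_volume * serum_k + lr_volume * lr_k) / (ecf_volume + lr_volume)
print(f"serum K after infusion: {new_k:.2f} mEq/L")
```

Because the infused fluid's potassium concentration (4 mEq/L) is below the patient's serum level (6 mEq/L), mixing can only pull the concentration down, whatever volume one assumes for the extracellular space.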

Because almost all potassium (~98%) in the body is intracellular, the infusion of any fluid with a normal potassium content results in prompt redistribution of potassium into the cells, negating the already negligible effect of the infused potassium.

A normal saline infusion is acidic, resulting in potassium shifting out of cells and increasing the serum potassium level. Lactated ringers, containing the equivalent of 28 mEq/L of bicarbonate, does not cause acidosis.

There's a lot more in the post. Read it.

This issue is arguably the most misunderstood fluid and electrolyte concept in all of medicine.

In my opinion, the post should be displayed on the bulletin boards of intensive care units, emergency departments, and inpatient floors of every hospital in the world and should be read by every resident or attending physician who writes orders for IV fluids.

Disclosure: I've never been a fan of normal saline. Two years ago I wrote a post that discussed two papers showing that because of its negative effects on renal function, normal saline was inferior to lactated ringers in critically ill patients.