New challenges of an aging epidemic

The AIDS activist Larry Kramer once said,

AIDS was allowed to happen. It is a plague that need not have happened. It is a plague that could have been contained from the very beginning.

Over the past 10 years we have witnessed incredible advances in HIV science and HIV treatment, but Mr. Kramer's words still ring true. We could have done more to stop it, and much of the work remains undone.

This is particularly poignant since new challenges have now joined the old scourges of poverty and stigma and the wily habits of a historically pernicious virus. 

These new challenges include what one of my patients called the "Peter Pan Syndrome" and an uptick in injection drug use in many areas of the United States. The Peter Pan Syndrome describes patients who were told at the beginning of the HIV epidemic that they did not have long to live, and who now grapple with the accelerated effects of aging, a phase of life they never thought they would face. Additional challenges include complacency among young people who didn't grow up losing friends to HIV, donor fatigue, and the short-term thinking of budget-conscious legislators who cut funding to HIV prevention programs that save lives.

These and other modern realities of the 2015 HIV epidemic are on full display in a new article in the Concord Monitor, in which I was proud to be quoted:

The most pernicious myth in the HIV epidemic today is that people infected with HIV contracted the virus because they are somehow different – that in some way people with HIV deserved to get infected. This is hard-hearted and ill-informed, but I understand how the finger-pointing can be a defense mechanism against fear. The truth is, we are all vulnerable to this wily virus, and the only way we will win against HIV is to band together in compassion.

Twenty years from now will we look back and say we learned from our early mistakes, or will we rue the mistakes we made again and again?

The Placebo Gene

Scientists from Harvard Medical School recently published a persuasive summary of evidence behind a revolutionary idea: that the placebo effect has genetic origins.

Although their work is preliminary, it raises some fascinating questions.

Might people with a genetic predisposition to larger placebo effects skew the results of small clinical studies if they're unevenly allocated to one or another of the study groups?

Should clinical trials stratify their analyses by placebo genetics?
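For the statistically inclined, here is a toy simulation of that first question (every number here is invented for illustration, not drawn from the study): if genetic "placebo responders" land unevenly across the arms of a small trial, the estimated drug effect can drift well away from the truth.

```python
import random

random.seed(0)

def simulate_trial(n_per_arm=20, drug_effect=1.0, placebo_boost=2.0,
                   responder_rate_treatment=0.3, responder_rate_control=0.3):
    """Toy model: 'placebo responders' get an extra outcome boost.
    Returns the estimated drug effect (treatment mean minus control mean)."""
    treatment = [drug_effect
                 + (placebo_boost if random.random() < responder_rate_treatment else 0.0)
                 + random.gauss(0, 1)
                 for _ in range(n_per_arm)]
    control = [(placebo_boost if random.random() < responder_rate_control else 0.0)
               + random.gauss(0, 1)
               for _ in range(n_per_arm)]
    return sum(treatment) / n_per_arm - sum(control) / n_per_arm

# Responders split evenly across arms: estimates hover near the true effect (1.0).
balanced = [simulate_trial() for _ in range(1000)]
# Responders concentrated in the treatment arm: estimates inflate.
skewed = [simulate_trial(responder_rate_treatment=0.6,
                         responder_rate_control=0.0) for _ in range(1000)]

print(sum(balanced) / len(balanced))  # close to the true effect of 1.0
print(sum(skewed) / len(skewed))      # inflated by the extra responder boost
```

If placebo genetics could be measured up front, trials could stratify randomization on it, just as they already do for other prognostic variables.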

The most fascinating use of this observation, I think, could be the development of novel therapeutics that leverage the mechanisms underlying the placebo effect. Forget the underlying pathophysiology - perhaps we could just focus on wellbeing. 

Proud to be quoted in a nice story by Lisa Rappaport in Reuters. Here's another nice summary and the original article on which they were based.

Posted on April 16, 2015 .

Protecting patients from employee drug diversion

In May 2012, Exeter Hospital in Exeter, New Hampshire, announced it would temporarily close its cardiac catheterization lab after dozens of patients were diagnosed with acute hepatitis C infection.

In time, a multi-state investigation revealed that every case was linked back to a lab technician who was using patients' drugs himself and then putting contaminated vials back into use.

Further, there were multiple opportunities for the hospitals in which this technician worked to protect patients from him - but none were taken.

In a new article, ethicist Bill Nelson and I propose a nationwide reporting system that would help protect patients from the risks of drug diversion-related outbreaks like this one. 

Posted on March 13, 2015 .

Why it's getting harder to know if someone is dying

We tried our best, but CPR, an injection of epinephrine, and 360 joules of electricity all failed to restart Mrs. Melnyk’s heart. When everybody on the resuscitation team agreed that we could do no more, I said the words: 

“Time of death, 9:32.”

As we cleaned up, a young nurse began to tuck a clean white sheet around Mrs. Melnyk’s body—and then suddenly stopped. 

“Wait!” she shouted, pointing at the heart monitor. There on the screen, an electrical impulse registered and quickly disappeared, replaced by a flat green line. “It’s too soon to give up!” the nurse said. 

It turned out the young nurse had been fooled by a stray electrical discharge on an EKG machine. Together with her supervisor, we talked it through, and did not prolong the code blue.

That kind of confusion is getting more difficult to clear up, though. With newer technologies like PET scans and ECMO, the dividing line between life and death is getting harder to define. 

To read more, check out my new post in The Atlantic. 

So much has changed since the HIV test was first approved, 30 years ago today

Thirty years ago today, on March 2, 1985, the Food and Drug Administration approved a new HIV test. It was the result of nine months of round-the-clock labor by dozens of scientists. Immediately adopted by the American Red Cross and other institutions, the blood test marked the beginning of a new era in HIV medicine.

Since then, so much has changed. Check out my new post at The Conversation to learn more.

Posted on March 2, 2015 .

The missing microbial link

It turns out the oily wind that a subway train brings into a subway station carries its own passengers with it: bacterial DNA.

As passengers step into the car and compete for seats, they do so thickly coated with the DNA of millions of microscopic parasites. 

But do not be alarmed. Recent news stories have reassured us that while DNA from bubonic plague can be found in the nooks and crannies of New York City subways, we should no more worry that a cross-town ride risks the plague than that asymptomatic patient Craig Spencer spread Ebola.

We should extend the same logic to hospital infection control.

Every few months there is a new article confirming that doctors, nurses and the things they wear and use are not sterile. NOOO! Stethoscopes, ties and our beloved white coats in particular have drawn ample attention for their unsurprising propensity to - gasp! - have bacteria on them.

This is useful research, and hypothesis-generating. Might doctors and nurses contribute to the epidemic of healthcare-associated infection (HAI) via fomites like these?

Absolutely, they could.

But our reactions to this interesting hypothesis have been far from scientific. We have gone from hypothesis to heartfelt belief in five seconds flat.

One second we know stethoscopes and ties and white coats have bacteria on them and the next we conclude that we should do away with the lot of them. Britain's NHS, for instance, famously adopted a "bare below the elbows" stance in 2008, and US infection control mavens have recommended a similar policy recently.

Where's the science? 

We all want to prevent HAI's, and lord knows the physician fashion index would rise by fully 2.5 points if we left our silly white coats at home (excellent logistical points made by an esteemed surgical blogger aside). But before we invest in nationwide changes to attire and clinical practice, we should convert these reasonable hypotheses into real evidence. 

If going bare below the elbows, or skipping neckties, or burning our white coats, really will protect patients, then we should be able to prove it.

Before we draw conclusions, let's do the damn experiment: compare infection rates in comparable patients cared for by providers who do or do not wear neckties, or who do or do not subscribe to a BBE policy, whatever. Bring on the data!

It's not that easy, you say?

I understand. Science is hard. 

If we can't show an intervention works, why invest in it? It is no great challenge to leave our ties at home, but until we base our recommendations in science, the hard work of culture change may not be worth it. Why not just make the change anyway? Well - how well is that argument working so far?

Let's be scientists, people, lest we get schooled one day that these stories of stethoscope contamination are only as alarming as bubonic plague DNA found in the New York City subway.


Posted on February 9, 2015 .

Was a study about bias - biased?

Sexism is a major problem in science, technology, engineering and medicine, and the first step to fixing this problem is awareness.

Yet a new study suggests awareness is exactly what many men lack.

Moss-Racusin et al. write in Psychology of Women Quarterly that among 423 respondents in an online forum about articles on sexism in STEM fields, men were more likely than women to suggest sexism isn't a problem, even when confronted with clear evidence to the contrary.

Let that sink in a little.

Men sure are clueless, aren't they? Let's get out the cattle prods. 

BZZT!

Or... maybe not.

A closer reading of the study reveals that the way it was conducted could lead to bias. Not bias in the sense of being sexist, but bias in the statistical sense of the word, i.e. that the authors' conclusions may be invalid.

Here's why.

It turns out the authors collected 831 responses from an online forum regarding articles on sexism in STEM fields. We don't know who responded, and how they might differ from those who didn't. Were non-sexist men less likely to opine, leaving the sexist trolls to put the dark side of their humanity on full display? Probably - but we just can't really know because the information is unavailable. 

That's not all. Of those 831 responses, the authors analyzed data only from the 423 in which the respondent's sex could be inferred from their response. If the respondent said, "As a man, I am aggrieved that women are scientists but there is no evidence of sexism," then their rabidly sexist and clueless response was included in the study. If a man wrote, "Yay women in science - but they face sexism all the time" without indicating his sex, his more enlightened response was thrown out.

That's right: the authors did not systematically evaluate the major variable of interest - respondent sex - in an unbiased way. As a result, their analyses were potentially biased toward people who mention their sex when writing about sexism. If, for instance, male sexists are more likely to mention that they are men, then the study conclusions are completely invalid. 
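For the statistically inclined, here is a toy simulation of that distortion. The rates below are invented for illustration, not taken from the paper: a population of male commenters in which 30% deny sexism, and in which deniers are more likely to announce their sex ("As a man, ...").

```python
import random

random.seed(1)

# Toy population of 10,000 male commenters; assume (invented number)
# that 30% of them post sexism-denying comments.
TRUE_DENIAL_RATE = 0.30
men = [random.random() < TRUE_DENIAL_RATE for _ in range(10_000)]

# Key assumption for illustration: deniers mention being men 80% of
# the time, while other men mention it only 20% of the time.
def mentions_sex(is_denier):
    return random.random() < (0.8 if is_denier else 0.2)

# The study's inclusion rule: keep only commenters whose sex is inferable.
included = [is_denier for is_denier in men if mentions_sex(is_denier)]

observed = sum(included) / len(included)
print(f"true denial rate among men: {TRUE_DENIAL_RATE:.2f}")
print(f"denial rate among 'sex-inferable' men: {observed:.2f}")
```

With these made-up numbers, the inclusion rule roughly doubles the apparent denial rate among men, which is exactly the kind of distortion the authors cannot rule out.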

The authors do cop to some of the study limitations, but not this one. They write, regarding study limitations, that the study approach,

necessitated the usage of a non-random, self-selected sample of commenters. ... Commenters were limited to those who had access to the Internet, were sufficiently interested in bias in STEM to read an article on the topic, read one of our three selected articles, and decided to leave a comment. As such, although our sample may be representative of the focal underlying population of individuals who are interested in such issues, as well as read and comment on such articles, these commenters may not be representative of other groups.

True enough. As a man in STEM, I hope they aren't representative of me! 

But more centrally, can the authors really conclude they know how aware men (or even men online) are of sexism in STEM if they do not even know how many of the original 831 respondents are men? (Not to mention the full group of men who read the articles in question in the first place.) Similarly, how can they conclude that men are more likely to be skeptical of sexism in STEM fields if they cannot compare all men's responses to all women's responses?

Short answer: they can't. 

What is sad is that the authors are able to cite a bevy of other articles that DO suggest some men in STEM fields are sexist, and that some men deny that sexism exists. I don't doubt it: I've seen both, and wouldn't argue otherwise. 

Here's a random sampling of anonymized Twitter commentary about the findings:

[screenshot of Twitter comments omitted]

Reading this I realize that stepping into this issue could leave me with egg on my face or something stinky on my shoe.

I would guess that the lived experience of these and other commentators - like mine, let me emphasize - supports the idea that some men in STEM disregard sexism even when it's staring them in the face.

When such preexisting beliefs - realities! - appear to be confirmed by a study like this one, it's easy to turn off our critical thinking skills and trot it out for the world to see. The risk, of course, is that when the clear problems with the study's structure come to light, platinum-plated as the authors' intentions surely are, the ensuing discussion can undermine the true message we were trying to convey. 

We should call out and fight sexism in STEM fields. It is pernicious, archaic and it must go. While fighting that noble fight - which we are winning by the way - we should not undermine the effort by basing conclusions on flawed data.


This blog was cross-posted at KevinMD.

Posted on January 9, 2015 .

Why we shouldn't say we have a "cure" for HIV until it's really true

The Berlin patient, Timothy Ray Brown, is historically unique - he is the only person ever truly cured of HIV. 

But in recent years scientific journals and the popular press alike have published multiple claims of HIV cures. From the French "functional cure" to the Mississippi baby, we have seen the word "cure" used a lot -- as well as vague synonyms for it like "cleared" and "HIV-free" -- and yet each time we've had to walk the hype back. 

Check out my new post over at The Conversation on why we shouldn't overhype HIV "cures."

Posted on December 16, 2014 .

Doctors Should Not Deny Ebola Patients CPR

The first time I did CPR, coagulated blood spurted onto my new white coat from a wound in the patient’s chest. Another time a patient’s urine soaked through the knees of my pants as I knelt at his side.

Even in the best of conditions, cardiopulmonary resuscitation (CPR) is a spit-smeared, bloody business that can expose health care workers to all kinds of body fluids. Like all health care workers, I put on gloves and a game face and accept such things as part of patient care.

The 2014 Ebola outbreak changes all that. It is much more dangerous for clinicians to resuscitate patients with Ebola. As a result, should we skip CPR altogether? Bioethicist Joseph Fins of the Weill Medical College of Cornell University recently suggested we should.

I disagree. See my rebuttal at Health Affairs. What do you think?

Posted on December 11, 2014 .