From Cradle to Contagion
Editor's note: This article was originally published on Substack in Plagues, Pollution & Poverty. To read more content from this source, subscribe to Plagues, Pollution & Poverty: https://tinyurl.com/ymabpyt7
Author: Bruce Lanphear, MD, MPH

Adam Kucharski’s July 2023 Substack post yanked me back to my early days in public health—when I first stumbled on the idea of herd immunity. It felt like discovering a secret code: one number that could predict whether an epidemic would fizzle out or explode.

Epidemics are often compared to fire. A disease smolders until conditions are favorable, then it becomes epidemic… spreading like fire in dry grass. The metaphor stuck—and for good reason. Like fire, a virus needs fuel.

In the early 1900s, epidemic science was still in its infancy. Researchers like John Brownlee blamed outbreaks on the microbe’s “infective power”—its virulence. But others saw a more complex picture.

The Epidemic Triangle

In his 1928 Cutter Lecture at Harvard—published decades later—Wade Hampton Frost, the first professor of epidemiology at Johns Hopkins, offered a broader theory. Epidemics, he said, depend on three factors: the microbe, a susceptible host, and an environment that brings them together. Remove one, and the chain breaks. “Epidemics would die out,” he said, “for lack of susceptible hosts.”

But how many people had to be immune to a virus like measles to stop its spread? William Hedrich of the U.S. Public Health Service tackled that question. Studying measles outbreaks in Baltimore from 1900 to 1931, he pieced together estimates of immunity using birth records, school files, case reports, and death registries—flawed and incomplete as they were. He found that measles transmission slowed when 55% of children were immune.

Later studies pushed that number higher—first to 70%, then 76%, 85%, and eventually 95%. Was Hedrich wrong? Or was herd immunity more complicated than a single number?

In 1971, epidemiologist John Fox issued a warning: “Simple thresholds don’t capture herd immunity in diverse populations.” Percentages matter, he said—but so does where, and among whom, immune individuals live. “No matter how large the proportion of immunes in the total population,” Fox wrote, “if some pockets of the community, such as low economic neighborhoods, contain a large enough number of susceptibles among whom contacts are frequent, the epidemic potential in these neighborhoods will remain high….”

That truth revealed itself—dramatically—in a later epidemic.

The Measles Mystery of 1989–1990

In the pre-vaccine era, measles was an unavoidable part of childhood. During the baby boom, cases soared—millions of children were infected each year. In the early 1960s, measles was hospitalizing nearly 50,000 Americans annually; hundreds died. It wasn’t until the measles vaccine was licensed in 1963 that the tide began to turn.

After a decade of record-low measles cases, the U.S. was hit with a resurgence in 1989 and 1990. Hospitalizations rose. Children died. It wasn’t a return to the pre-vaccine era, but it was serious enough to launch a national investigation.

The usual suspects didn’t explain it. The virus hadn’t changed. The vaccine still worked. Coverage among two-year-olds had held steady—around 65%—for more than a decade. There was no spike in imported cases. And the now-infamous paper linking MMR to autism hadn’t yet been published. So what happened?

The answer was buried in the birth records. A baby boomlet.
Births surged in the late 1980s, creating a large pocket of infants—too young to be vaccinated—big enough to reignite the fire. The virus didn’t change. The number of susceptible children did.

Infants under one, ineligible for the vaccine, took the hardest hit. Not because parents refused vaccines, but because there were more babies than expected. From 1980 to 1988, infants under 12 months accounted for 8% of measles cases. By 1990, their share had jumped to 17%—the highest ever recorded in that age group. And it wasn’t just infants. The number and density of unprotected preschool and school-aged children grew, even though vaccination rates stayed flat.

The Polio Paradox

Decades before the measles resurgence, a similar pattern unfolded. In the 1950s, the baby boom turned polio from a lurking threat into a national crisis. U.S. births surged from 2.3 million in 1933 to 4.3 million in 1957—an 87% increase. That surge created dense pockets of young, susceptible children: perfect fuel for polio. Each summer, the virus found its spark. Epidemics flared across North America.

A study in PLOS Biology by Martinez-Bakker and colleagues showed that polio’s rise wasn’t caused by a more virulent virus or declining hygiene. It was demographic. The baby boom created the critical mass needed for repeated, large-scale outbreaks. Their study also revealed how silent transmission—virus spread without visible illness—kept the fire smoldering between epidemics. The rhythm of polio wasn’t just shaped by sanitation or climate; it was driven by how many children were waiting to be infected. The virus hadn’t changed. The population had.

Strategy, Not Simplicity

Herd immunity isn’t a magic number. It’s a moving target—shaped by who we are, where we live, how we gather, and how fast new babies are born.

In 19th-century London, smallpox spread through an unbroken chain of newborns and young adults arriving from smaller towns. To stop transmission, most of the population had to be immune. But in smaller towns, outbreaks often fizzled out for lack of fuel—even when fewer than half the population was immune. The denser the city, the higher the bar for herd immunity.

In Nigeria, short on vaccines, William Foege pioneered ring vaccination: find a case fast, then vaccinate everyone nearby. This approach wiped out smallpox in eastern Nigeria, even with vaccine coverage below 50%. Indeed, while many experts were convinced that 80% immunity was necessary to eliminate smallpox, evidence suggests that transmission was interrupted both where population density was lower and where vaccine coverage was higher.

National or state vaccination rates miss the point. What matters is where unvaccinated children live—and how closely they interact. That’s why epidemic potential, a measure of susceptible host density, may be more useful. It captures both the number and the clustering of vulnerable children, giving a clearer view of when and where outbreaks are likely to ignite. With better tools to map these susceptible pockets, we can improve our forecasts. Epidemic potential helps explain why measles cases rise and fall, even when vaccination rates seem stable. It’s not just about hitting 95%. It’s about knowing where susceptible children are clustered.

An epidemic is choreography: sparks and kindling, index cases and vulnerable hosts, coming together under just the right conditions. Learn the rhythm, and we can predict not just when outbreaks will end—but why they begin.
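A quick back-of-the-envelope calculation shows why the bar moves. The classic textbook relation puts the herd immunity threshold at 1 - 1/R0, where R0 is the average number of people one case infects in a fully susceptible group. The short sketch below applies that relation to a few R0 values; those values are illustrative assumptions, not estimates from Hedrich, Fox, Foege, or any study cited here.

    # A toy calculation of the classic herd-immunity threshold, H = 1 - 1/R0.
    # The R0 values are illustrative assumptions (not figures from this article):
    # crowding pushes R0, and therefore the threshold, upward.

    def herd_immunity_threshold(r0: float) -> float:
        """Share of a population that must be immune so that each case
        infects fewer than one other person, on average."""
        if r0 <= 1.0:
            return 0.0  # the outbreak fades on its own
        return 1.0 - 1.0 / r0

    illustrative_settings = {
        "sparse town, assumed R0 = 2": 2.0,
        "mid-sized city, assumed R0 = 6": 6.0,
        "dense city with measles, assumed R0 = 15": 15.0,
    }

    for label, r0 in illustrative_settings.items():
        print(f"{label}: about {herd_immunity_threshold(r0):.0%} immunity needed")

Under those assumed values, the bar runs from about half the population in a sparse setting to more than nine in ten for measles in a dense city, which echoes the range traced above from Hedrich's 55% to today's 95% and is one more reason a single national figure can mislead.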
A Lifesaving Tool

Herd immunity began as a theory. Then it became a strategy. In the battles against smallpox, polio, and measles, it proved to be a lifesaving tool.

We still argue about vaccine effectiveness and breakthrough infections. But the more important question is this: Where are the pockets of dry kindling—the susceptible children—building up? Because that’s what decides whether the next fire smolders…or explodes.

On The Origins of Herd Immunity

After my Substack post on herd immunity, I got a note from David S. Jones, a Harvard medical historian. He reminded me that historians have been debating its origins with as much vigor as epidemiologists debate everything else. I wasn’t writing about the early roots of herd immunity at the time—but once he pointed me to them, I couldn’t resist sharing.

David’s article, A History of Herd Immunity, traces the term back to veterinarians. In 1918, George Potter explained it plainly: “Herd immunity is developed by retaining the immune cows, raising the calves, and avoiding the introduction of foreign cattle.” Even earlier, in 1894, Daniel Elmer Salmon (yes, the man for whom Salmonella is named—though his assistant Theobald Smith discovered the bacterium) used “herd immunity” to describe the collective resistance of animals to disease. Salmon thought it could be strengthened through good breeding, sanitary conditions, and scientific nutrition.

By the 1920s, British epidemiologists W. W. C. Topley and G. S. Wilson borrowed the idea for humans, framing herd immunity as a way whole communities might become invulnerable to epidemics. Around the same time, Sheldon Dudley studied diphtheria outbreaks in a British boarding school and came closer to what we mean today: the idea that once a certain percentage of a group is immune, the whole group is protected.

So while herd immunity became famous during COVID-19, its roots run deeper—and quirkier—than most of us knew. Cows, boarding schools, bacteriologists, and epidemiologists all played a part. Thanks to David for nudging me down this historical rabbit hole. Herd immunity remains one of my favorite topics, and it turns out the herd itself was there from the very beginning. ■