This weekend there was a fantastic article in the Journal about building antitrust momentum against Big Tech and the rising Techlash of 2016-2019. It’s an incredible read and really shows you how far we’ve come since this puff piece about Chris Hughes’s second “startup”: the ’08 Obama campaign[1].

At the same time, I just finished an incredible book on evolutionary psychology, The Moral Animal. An oldie but a goodie, it gave me insights into the Theory of Natural Selection that I had not encountered in any of my college psychology courses or other readings (Dawkins, Wilson, Etcoff). The play-by-play of Darwin’s life that Wright lays out, while simultaneously weaving in an explanation of the theory and its many consequences, is pretty amazing[2].
One passage in particular toward the end of the book really piqued my interest. As Wright details, despite its simplicity the Theory of Natural Selection has major implications for one of the most nagging questions humans have asked: is there such a thing as free will? If all of our reward and punishment systems are biochemically driven, “designed” by natural selection to promote fitness, do we really have any volition in the actions we take? Or are we all just justifying actions post hoc, as if we were a “rider atop an elephant”?
The sustained momentum of the Techlash is, I think, driven in large part by the fact that Americans at some level feel it is deeply unfair that Big Tech is profiting off of our deepest impulses in ways the broader public didn’t understand until now. As an example, a former Facebook executive implicated “dopamine-driven feedback loops” in Facebook’s destruction of our society and comity writ large.
I can’t answer the question of whether we have free will and/or whether it is moral that companies are profiting off of their weaponization of natural selection. But what I want to start to answer is, what if natural selection were weaponized for good? What if all of the addictive impulses of our reward and punishment systems were leveraged by technology to promote social good?
Let’s take the healthcare industry as an example. Most players in healthcare are driving toward the Triple Aim: lower cost, higher quality, better experiences. What if we:
- Simplified Consumer Healthcare Apps to Drive Positive Feedback Loops? People are really embracing the wearables trend[3], but counting steps has diminishing returns above a goal of 7,500 per day. How do we as an industry gamify the treatment of chronic diseases like renal failure, diabetes, and obesity (a toy sketch of such a reward curve follows this list)? Companies like Livongo lead the way toward reducing the overall costs of these chronic conditions, which represent up to 90% of all U.S. healthcare spending annually.
- Made Lower-Cost Options the Default? Research shows that defaults are perhaps the most important factor in creating long-lasting changes in human behavior[4]. However, when we sign up for health insurance plans, finding coverage and establishing a primary care relationship are among the most difficult things to do! What if, during the annual enrollment/renewal process, payors automatically enrolled patients at the lowest-cost clinic (e.g., CVS HealthHUB, Walgreens VillageMD, a local family medicine practice) to handle routine disease management? Combining this with the ability to opt out via a single button click would simultaneously lower cost and preserve consumer choice.
- Provided Better Visualizations for Probabilistic Outcomes? In his book Thinking, Fast and Slow, Daniel Kahneman describes how he and Amos Tversky moved the psychological mainstream from viewing humans as “probability calculators” to “heuristics users”. Subsequent research has shown that humans are notoriously terrible at interpreting probability. How can we advance the visualization aids and tools used in the delivery of healthcare so that people better understand their choices and the outcomes likely to follow from various treatment pathways (see the icon-array sketch after this list)?[5]
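To make the first idea a bit more concrete, here is a toy sketch in Python of a reward curve with diminishing returns past the 7,500-step goal. Everything in it is hypothetical (the point formula, the 1,000-point scaling); it is just one way an app might reward daily consistency over one-off binges, not anyone’s actual product logic.

```python
import math

DAILY_STEP_GOAL = 7_500  # the daily goal referenced above; marginal value tapers beyond it

def reward_points(steps: int) -> float:
    """Toy reward curve: roughly linear up to the goal, sharply diminishing after it.

    Below the goal every step earns a point; above it, extra steps earn
    logarithmically fewer points, nudging daily consistency over bingeing.
    """
    if steps <= DAILY_STEP_GOAL:
        return float(steps)
    extra = steps - DAILY_STEP_GOAL
    return DAILY_STEP_GOAL + 1_000 * math.log1p(extra / 1_000)

for s in (3_000, 7_500, 12_000, 20_000):
    print(f"{s:>6} steps -> {reward_points(s):>8.0f} points")
```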
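And for the third idea, a similarly hypothetical sketch of an icon array, one of the simplest visualization aids for probabilistic outcomes: it restates a risk as a natural frequency (“12 out of 100 people”) rather than a bare percentage. The risk numbers in the example are made up.

```python
def icon_array(risk: float, cohort: int = 100, hit: str = "■", miss: str = "·") -> str:
    """Render a probability as 'X out of 100 people' rather than a percentage.

    Natural frequencies and icon arrays tend to be easier for patients to
    grasp than '12% risk'; the grid makes the denominator concrete.
    """
    affected = round(risk * cohort)
    icons = [hit] * affected + [miss] * (cohort - affected)
    rows = [" ".join(icons[i:i + 10]) for i in range(0, cohort, 10)]
    return f"{affected} out of {cohort} people:\n" + "\n".join(rows)

# Hypothetical comparison: a 12% vs. a 4% complication risk for two treatment pathways
print(icon_array(0.12))
print()
print(icon_array(0.04))
```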
I’m really excited to see announcements from CVS that they are continuing to disrupt healthcare with the introduction of “HealthHUBs”. Structural changes that improve how we pay for and deliver healthcare are always welcome in my book. However, Wright’s book opened my eyes to the fact that we will need more nuanced answers for how to fight millions of years of evolution driving behaviors that, while once evolutionarily adaptive, are now being exploited by industries from Tech to Pharma to Food/CPG to produce sub-optimal outcomes.
—
[1] For Chris Hughes’s latest reversal into anti-tech crusader, see his NYTimes op-ed on breaking up Facebook.
[2] Is self-delusion actually a wonderful trait for natural selection? Likely so. The more authentically a deluder believes their own delusion, the better they can persuade other chimps that the delusion is “truth”. What implication does this have for the definition of “truth”? This is left to the reader to ponder.
[3] For more on this, see Apple CEO Tim Cook predicting that the company’s greatest contribution to mankind will be in healthcare.
[4] Shout-out to Richard Thaler, the 2017 Nobel Laureate in Economics and a teacher at my dear alma mater!
[5] A lot of the momentum in this area was blunted during the push for the Affordable Care Act. Patients confronted with “probabilities” were most often end-of-life patients and their families trying to understand treatment options and/or palliative care. Reviews and explanations of evidence-based medicine protocols were labeled “death panels” and ended up politically DOA (no pun intended). Now that we know that 25% of all Medicare spending occurs in the last year of life, the issue rears its ugly head again.