Can Artificial Intelligence Help Us Make Better Decisions During a Crisis?

We are facing an unprecedented public health crisis, leaders are rationing critical supplies, and doctors are increasingly forced to choose who will live and who will die. With a limited number of ventilators, who gets one and who goes without? Should this patient be admitted or sent home?

A simple answer to these questions may be to treat whichever patient is most urgently in need. Yet a closer look reveals a thicket of conflicting ethical considerations. Some patients may need a ventilator sooner because of the particulars of their condition, while others may need to keep supporting young children. Why should the rich and famous get faster access to testing? Are younger patients more deserving of a ventilator than older patients? What priority should the disabled and vulnerable have?

Even in more normal times, doctors and hospital administrators are called upon to make decisions quickly, while keeping all of these ethical considerations and more in mind simultaneously. After many hours of high-intensity work, a doctor's own cognitive resources may be impaired by sleep deprivation and fatigue. These sorts of ethical problems are persistent in medicine, and hospital ethics review boards, made up of medical professionals and expert ethicists, struggle with such dilemmas even without added factors like time pressure and sleep deprivation.

Yet we may be on the brink of a revolution in ethics. New research and analysis from leading computer scientists, ethicists, and psychologists indicates that artificial intelligence tools, if created with the proper parameters from the outset, could prove instrumental in improving people's ethical decision-making, particularly in complex or high-pressure situations.

It may seem counterintuitive to argue that AI may help us become better ethical decision-makers. In popular culture, the annals of science fiction, and indeed in the real world today, AI tends to be seen as either a tool of villains or a force that inevitably and implacably turns against humanity. After all, in the Terminator films, it’s an AI called Skynet that rains down nuclear destruction on the world and seeks robotic domination. In The Matrix, the AI seeks to enslave people’s minds. And perhaps the most famous AI of them all, 2001: A Space Odyssey’s HAL 9000, is bent on destroying his human operators.

Recently, we have seen electronic surveillance deployed by Chinese authorities to control the spread of coronavirus, to the consternation of democratic countries. Even Elon Musk, a leading proponent and innovator in the field of self-driving cars, has argued that unregulated AI may be more dangerous than nuclear weapons. But that is only one vision of artificial intelligence's place in society. For every way AI can be used for personal gain or to the detriment of society, there are ways it could have a positive impact on our lives.

In a Templeton World Charity Foundation (TWCF) sponsored project, Duke University ethicist Walter Sinnott-Armstrong and neuroscientist Jana Schaich Borg have teamed up with psychologists and computer scientists at the university to investigate ways in which AI can be used to aid ethical decision-making.

Sinnott-Armstrong argues that artificial intelligence, if trained with the right kind of data, could be a valuable aid in making complicated ethical decisions. Rather than being given control of the decision, the AI is able to learn the patterns of ethical thinking that humans engage in and replicate those with new data sets, only without the interference of outside distractions, sleep deprivation, or complicated emotions that might cloud human decisions.
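To make the idea concrete, the sketch below shows one way such a system might work, written in Python with scikit-learn: a simple model is fit to a hypothetical record of past cases and the judgments a human review panel made about them, then asked to score new cases. Every feature name, data point, and modeling choice here is invented for illustration and is not drawn from the Duke project.

```python
# Hypothetical sketch only: the features, data, and model are invented for
# illustration and are not taken from the Duke project or any real triage protocol.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a hypothetical past case, described by information a doctor
# would already have: an urgency score, an estimated survival benefit,
# and the number of dependents. The label records whether a human review
# panel prioritized that patient for a ventilator.
past_cases = np.array([
    [0.9, 0.6, 2],
    [0.4, 0.8, 0],
    [0.7, 0.3, 3],
    [0.2, 0.9, 1],
    [0.8, 0.7, 0],
    [0.5, 0.5, 2],
])
panel_decisions = np.array([1, 0, 1, 0, 1, 0])  # 1 = prioritized

# Fit a simple model to the panel's pattern of judgments.
model = LogisticRegression().fit(past_cases, panel_decisions)

# For new patients, the model offers suggestions: priority scores that a
# clinician can consult, question, or overrule.
new_patients = np.array([
    [0.85, 0.65, 1],
    [0.30, 0.90, 0],
])
suggested_priority = model.predict_proba(new_patients)[:, 1]
for patient, score in zip(new_patients, suggested_priority):
    print(f"features={patient} -> suggested priority {score:.2f}")
```

The design point the sketch illustrates is that the model's output is advisory: it applies the panel's own past pattern of judgment consistently to new cases, while the final decision remains with the human.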

Machine-learning systems can counter fatigue, bias, and confusion. In short, these tools may help doctors and hospitals better live up to their own expressed ethical standards.

Theoretically, an AI could take the same information a doctor has about her patients and generate a series of suggestions which could then be used to inform human decision-makers. Instead of ‘outsourcing’ moral decisions to machines, these new tools serve to enhance our innate capacity for moral decision-making. The full post is available on the Templeton World Charity Foundation website.

TAGS: Templeton Universe, Stories of Impact, Stories of Impact Podcast
