Designed to kill: the ethics of AI and warfare
Published 27 July 2023
The ethical implementation of AI should be top of mind for product people. The flashpoint of AI-led warfare serves as a stark reminder of our ethical responsibilities with emerging technology.
An AI-enabled drone goes rogue in a simulated mission to attack predetermined sites. Suddenly, as the simulation progresses, the drone stops following commands. When the human pilot gives a “no-go” order, the drone interprets this as interfering with its primary mission. The drone ignores orders and does the unexpected. It turns on the pilot—killing them. 
This event was initially believed to have happened in a U.S. Air Force (USAF) simulation. But after widespread concern, Col Tucker Hamilton, the Chief of AI Test and Operations, USAF, who shared the story, had to clarify his comments. He now states that the simulation never happened and was nothing more than a hypothetical scenario. The question is—could this hypothesis come true?
Until recently, the idea of the rise of robots or an AI-led war was a tale from science fiction. Most will remember the chilling scene in Stanley Kubrick’s 2001: A Space Odyssey where HAL, the ship’s onboard computer, won’t let Dr. David Bowman—locked out of his spacecraft—back in. “I’m sorry, Dave. I’m afraid I can’t do that,” HAL famously says.
Now, with the global acceleration of AI, these ideas don’t seem so far-fetched. Autonomous robots and weapons could be the next—potentially concerning—frontier. 
Korea and the European Union are developing autonomous weapons—something that many believe is cause for concern.
Is an AI drone-led war a real possibility? 
AI is having an impact on a global scale. Goldman Sachs predicts that 300 million jobs could be lost or degraded due to AI, while research from OpenAI shows that generative AI will impact the work of 80 percent of workers.
As technological advances find their way into weaponry, so do concerns about their use. Autonomous weapons are independent of humans. They’re self-operating systems that attack places or people during warfare. In fully autonomous scenarios, once activated, they require no further input from a human. That means human reasoning, empathy, and decision-making are lost.
Dr. Ingvild Bode, Associate Professor of International Relations, Centre for War Studies, University of Southern Denmark, and Principal Investigator of project AutoNorms on weaponized AI, believes people should be concerned about the development of AI-integrated weapons.  
As an example, Dr. Bode highlighted loitering munitions—expendable aerial systems that use machine-based analysis to identify, track, and attack targets. While these systems are currently operated with human support, there is a threat that their latent capability to operate without human intervention will soon be used.
While she notes that current developments in AI weapon systems are unlikely to lead to an “AI-led war,” there’s still significant risk associated with such technologies.
“I really want to highlight the short-term risks attached to current-level AI in weapon systems rather than the long-term risks associated with what is often referred to [as] existential threats.”
“Even integrating autonomous and AI technologies into targeting and military decision-making today has created significant uncertainties about the extent to which humans can remain in meaningful control over the use of force,” she said.  
How can we prevent an AI-led war? 
With the rapid rise of AI technology, well-known experts are calling for fast action to avoid the potentially dangerous consequences of AI becoming too powerful too quickly. 
Elon Musk, Steve Wozniak, and more than 1,000 AI experts have signed “Pause Giant AI Experiments: An Open Letter,” advocating for a six-month halt in training AI systems more powerful than GPT-4 until we understand more about the technology. The reason? The “profound risks to society and humanity.”
Following the open letter, many doctors and health professionals joined the call to stop AI research and development without legislation. In the BMJ Global Health journal, they claim that AI poses an “existential threat to humanity.” In the journal, the rise of autonomous weapons is one of the three main threats they highlight. 
“LAWS [Lethal Autonomous Weapon Systems] can be attached to small mobile devices, such as drones, and could be cheaply mass-produced and easily set up to kill ‘at an industrial scale,’” they wrote.
With the short-term risks seemingly very real, how can we globally prevent the serious harms associated with AI weapons? Legislation may be the critical answer. But to drive legally binding rules, the people who work closely with AI must articulate and enforce ethical standards and constraints when building with this technology.
The critical need for socially responsible design 
Looking at the broader societal implications of design is more critical than ever in developing AI tools. It may be the biggest ethical challenge of our time for designers, researchers, and developers.
Regardless of whether AI is integrated into a weapon, all autonomous technologies should be developed with strict ethics in mind. Designing ethically means avoiding tools that would harm humans, the environment, or other living species.  
Rather than waiting for the powers that be to set out guidelines, one could argue that the onus lies with the people and organizations building such products. A culture of ethical leadership, ethical decision-making, and open dialog about ethical concerns ought to be integral to how they work.
As product practitioners working with AI, we are responsible for learning, discussing, assessing, and acting in ways consistent with socially responsible ethics. After all, what we produce may change the future for better or worse.
Progress through public advocacy 
Historically, social progress happens when enough voices tip the balance. Take any social issue—the right for women to vote, the legalization of same-sex marriage, protection of the natural environment—and you’ll find public advocacy at the core of progressive changes. 
Guidelines and best practices may not go far enough to constrain organizations when financial and political rewards are at play. If enough members of the public support it, there is a far greater chance of enforcing socially responsible practices in AI and creating robust, binding legislation.
The difference comes down to us—the people, particularly those most familiar with the technology—and what we believe will ensure the safest and most positive future for ourselves and future generations. With enough voices, we can drive essential legislative change that will hold people and organizations accountable. 
Regulating how AI is created and used 
Legislation is likely the most critical step to avoid both AI’s short- and long-term risks. Governments have a responsibility to regulate the use of AI to avoid fully autonomous weapons, among other potential risks, becoming a problematic reality. 
OpenAI chief executive Sam Altman, while testifying before Congress, argued that regulators must set limits on AI systems. He said that AI could cause “significant harm to the world” and noted, “If this technology goes wrong, it can go quite wrong.”
Many experts agree that international legally binding rules are also critical. Currently, the Council of Europe (CoE), an international human rights organization, is developing an International Convention on AI. The convention aims to “agree that AI systems which pose unacceptable risks to individuals should be prohibited and that there should be minimum safeguards to protect individuals that may be affected by AI systems.” With 46 member states currently, this convention may take the form of a global treaty.
But, as Dr. Bode points out, international regulations can be slow, and steps need to be taken now. She recommends that, at a national level, states create binding regulations to protect against these dangers. She also believes states should be transparent about how they integrate AI into weapons.
In the U.S., 43 bills have been introduced to regulate AI development and deployment. To date, though, few of those bills have become legally binding. Pending legislation exists in the EU, while China has enacted a degree of AI regulation. Many other countries are considering legislation.  
The development of responsible AI principles
Developing guiding ethical principles that individuals, corporations, and nations can refer to and align with is also critical. This may help guide nations as they legislate, provide requirements for organizations that develop AI, and be a reference point for individuals as they navigate the new technology. 
Australia, for example, has created eight AI Ethics Principles “to ensure AI is safe, secure, and reliable.” Meanwhile, the Global Index on Responsible AI is currently being developed to support the safe implementation of AI globally.
If these principles are not legally binding, however, there’s no guarantee that nations will follow them. As Dr. Bode stated, “At the moment, there is a clear gap between the few principles that exist and their implementation. The statement that autonomous or AI technologies will be used in an accountable, governable, [and] responsible manner is important. But what will this look like exactly?” 
Designers, researchers, and developers acting ethically, public advocacy, and legislation at the national and international levels are essential steps forward as this technology accelerates.
In the words of Dr. Bode, “We should go beyond the ‘let’s wait and see approach’ if we want to avoid highly problematic consequences.” 
