Difficult Questions

Artificial intelligence is already a reality in electronic warfare, and its use will only grow in the coming years. Its uptake raises ethical questions about how it should be employed to aid electromagnetic manoeuvre.

“This is why no one likes moral philosophers,” is a recurring line in the Netflix comedy series The Good Place. One of its characters, Chidi Anagonye, is a moral philosopher who frequently reflects on the ethical dilemmas facing his fellow characters.

At first blush, the relevance of ethics to Electronic Warfare (EW) may not be obvious. Nonetheless, the onward march of Artificial Intelligence (AI) in EW raises ethical questions. AI is the foundation of cognitive electronic warfare, a broad field which, put simply, seeks to deepen EW automation to reduce the burden on the human operator while quickening reaction times. AI sits at the core of this approach by providing decision-making support, either taking some decisions on the human operator’s behalf or aiding the latter’s decision-making.
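To make that division of labour concrete, the Python sketch below outlines one hypothetical human-on-the-loop pattern: the AI acts alone only on high-confidence, low-stakes recommendations and refers everything else to the operator. The emitter labels, actions and confidence threshold are invented for illustration, not drawn from any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI-generated course of action for one detected emitter."""
    emitter: str       # hypothetical emitter label
    action: str        # e.g. "monitor" or "jam"
    confidence: float  # model confidence in [0, 1]

def decide(rec: Recommendation, threshold: float = 0.9) -> str:
    """Act automatically only on high-confidence, low-stakes
    recommendations; refer everything else to the human operator."""
    if rec.action == "monitor":
        return f"auto: keep monitoring {rec.emitter}"
    if rec.confidence >= threshold:
        return f"auto: {rec.action} {rec.emitter}"
    return f"refer to operator: {rec.action} {rec.emitter}?"

for rec in (Recommendation("emitter-1", "monitor", 0.55),
            Recommendation("emitter-2", "jam", 0.97),
            Recommendation("emitter-3", "jam", 0.62)):
    print(decide(rec))
```

The design choice worth noting is that the threshold gates automation, not the final outcome: a low-confidence call is never silently discarded, it is handed to a person.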

The incorporation of AI into electronic warfare is not risk free. Kar Heng Lee is the founder and director of the TBSS Centre for Electronics Engineering in Singapore, and of TBSS Radar Technology and Monitoring in Vietnam. In September 2021 he presented a paper on the legal and ethical aspects of AI in radar and electronic warfare at the Association of Old Crows’ Asia virtual summit. Dr. Lee observes that “when designers develop (AI) algorithms, their design objectives are to produce the best AI training algorithm.” This may yield “very smart AI” algorithms that learn very efficiently. He argues that one must always keep watch for AI being programmed “to learn bad things”, whether by accident or by design.

Asimov’s Laws

He cites the example of a self-driving car using AI to make decisions during the car’s journey. What if the software steers the car away from a cyclist, but moves the vehicle dangerously into oncoming traffic? Dr. Lee says such scenarios are even more critical in war, where rapid life-and-death decisions are frequent. He argues that the Three Laws of Robotics devised by the author Isaac Asimov provide a useful template: a robot must not injure a human or allow a human to be harmed through inaction; a robot must obey all human orders unless they contradict the first law; and a robot must preserve its own existence as long as doing so does not contradict the first and second laws.

Isaac Asimov’s Three Laws of Robotics provide a useful template for the use of artificial intelligence in numerous defence applications including electronic warfare.
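As a thought experiment, the three laws can be read as constraints in strict priority order, each yielding to the ones above it. The Python sketch below is one hypothetical encoding; the predicates are deliberately stubs, because deciding in software whether an action actually harms a human is precisely the hard, unsolved part.

```python
# Each predicate is a stub: real systems have no reliable way to evaluate
# "harms a human", which is the core difficulty of applying the laws.
def violates_first_law(action: dict) -> bool:
    """Would this action injure a human, or allow harm through inaction?"""
    return action.get("harms_human", False)

def violates_second_law(action: dict) -> bool:
    """Does this action disobey a human order?"""
    return action.get("disobeys_order", False)

def violates_third_law(action: dict) -> bool:
    """Does this action needlessly endanger the machine itself?"""
    return action.get("self_destructive", False)

def first_violated_law(action: dict):
    """Check the laws in strict priority order; a lower-numbered law
    always overrides the ones below it. Returns None if permitted."""
    laws = (violates_first_law, violates_second_law, violates_third_law)
    for number, law in enumerate(laws, start=1):
        if law(action):
            return number
    return None

print(first_violated_law({"harms_human": True}))     # 1
print(first_violated_law({"disobeys_order": True}))  # 2
print(first_violated_law({}))                        # None: permitted
```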

Dr. Lee argues that AI should only be used in target detection, tracking and jamming, but not in firing. This last action, he posits, must always be performed by a human: “This is my cannot-be-compromised law,” he stresses.
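One way such a “cannot-be-compromised” rule could be implemented is as a hard gate in software: the AI may detect, track and jam on its own, but only an explicit human decision can release a weapon. The sketch below is purely illustrative; Dr. Lee describes a principle, not an implementation, and the action names here are assumptions.

```python
AI_PERMITTED = {"detect", "track", "jam"}  # tasks the AI may execute alone

def execute(action: str, human_authorised: bool = False) -> str:
    """Fire-control gate: the AI may propose firing, but only an explicit
    human authorisation releases a weapon, whatever the AI's confidence."""
    if action in AI_PERMITTED:
        return f"AI executes: {action}"
    if action == "fire":
        if human_authorised:
            return "weapon released on human authority"
        return "blocked: firing requires explicit human authorisation"
    raise ValueError(f"unknown action: {action}")

print(execute("jam"))                          # AI acts autonomously
print(execute("fire"))                         # blocked
print(execute("fire", human_authorised=True))  # released
```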

However, even the jamming task can pose ethical dilemmas. Imagine that Country-A is performing an air defence suppression campaign against Country-B. During the campaign Country-A has enjoyed moderate success. Much of Country-B’s integrated air defence system has been badly degraded, with several ground-based air defence radars neutralised electronically and kinetically. To address this deficit, Country-B starts using its civilian air traffic control radars to direct its fighters. These radars could be considered legitimate targets: although civilian infrastructure, they are now assisting Country-B’s military. Country-A could jam them, but doing so risks compromising Country-B’s civilian air traffic safety. Should AI have a role to play in making this decision? If so, how should the AI be programmed?
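If such a judgement were ever delegated to software at all, one defensible answer is that the AI’s job is merely to recognise that it faces a dilemma and escalate it. The Python sketch below illustrates that idea under invented assumptions; the attributes military_use and civilian_function are hypothetical labels, and classifying a real emitter this way is itself a hard problem.

```python
from dataclasses import dataclass

@dataclass
class Emitter:
    name: str
    military_use: bool       # currently supporting military operations?
    civilian_function: bool  # would jamming it endanger civilians?

def jamming_decision(e: Emitter) -> str:
    """Escalate dual-use cases to a human rather than letting the AI
    weigh military advantage against civilian risk on its own."""
    if e.military_use and e.civilian_function:
        return f"dual-use dilemma: refer {e.name} to a human commander"
    if e.military_use:
        return f"AI may jam {e.name}"
    return f"do not jam {e.name}: no military use"

print(jamming_decision(
    Emitter("ATC radar", military_use=True, civilian_function=True)))
```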

The International Approach

What is the solution to ensuring that the application of AI in electronic warfare follows Dr. Lee’s suggested ethical principles? He highlights the risk that “ethical issues may at times be overlooked or neglected” during the development of new systems or technologies. Perhaps one solution is to ensure that ethical dimensions are tackled from the outset and continually monitored during research and development.

At the strategic level, a global agreement on the ethical dimensions of AI, not only in military technology but across the board, would provide a framework within which engineers and scientists could work. International conventions on chemical, biological and nuclear weapons, and on landmines, are useful precedents. Not every nation would sign up, and any AI convention would be far from perfect, but it would at the very least provide a set of guidelines. Countries refusing to sign would see their all-important diplomatic soft power suffer as a result. Dr. Lee believes this would be an important first step towards addressing an unavoidable question: after all, “the establishment of global AI laws or guidelines for weapons and defence equipment” will benefit humankind.

by Dr. Thomas Withington
