While artificial intelligence has long held an almost fictional status, long on promise but short on delivery, it is now increasingly defining better ways of managing complex data.
“Artificial Intelligence (AI) has gone through many hype cycles (since Alan Turing first described the concept in the mid-1930s) and we are now coming back to another hype element,” Jake Rigby, research and development lead for Defence and Security at BMT, told Armada International. One merely had to walk around the exhibition floor of Sea Air Space 2022, or attend its various conferences, to see the full extent of this hype: AI was almost ubiquitous.
Yet, as with most hype cycles, the gap between expectations and reality needs to be carefully assessed, and the devil is in the detail: how does AI relate to Machine Learning (ML) and Deep Learning (DL)? How are these concepts being applied to naval warfare? And, just as importantly, what safeguards are navies putting in place to promote human-machine teaming rather than human-like machines?
Separating Fact From Fiction
As is often the case with buzzwords, AI has been taken to mean many different things in the context of warfighting. The conflation of AI with ML and DL has created the perception – and, for some, the fear – that tomorrow’s AI-enabled systems will replace humans on the battlefield. In reality, the relationship between the three concepts is that of Russian dolls: DL is a subset of ML, itself a subset of AI.
In the introduction to their book AI at War – How Big Data, Artificial Intelligence, and Machine Learning are Changing Naval Warfare, Sam J. Tangredi and George Galdorisi categorise AI into three groups: simple, narrow and general.
Simple AI is, effectively, automation. As the authors note, “[it] refers to machines that can do calculations through analogue or digital functions faster than most humans do,” and can make decisions based on those calculations. Narrow AI refers to machines that can “learn and self-programme, but only to perform a narrow or specialised range of activities.” Finally, general AI, also known as strong AI, refers to what people commonly understand as AI, that is, systems that can perform tasks mimicking human intelligence.
ML and DL are enablers for the different levels of AI. ML is a process of trial-and-error through which AI-enabled machines label large volumes of data as they continue to learn and modify their decision-making. Both narrow and strong AI include different degrees of ML. DL is the process of creating layers of algorithms on the basis of ‘if-then’ statements and pieces of labelled data; “any system involving more than three layers is considered to constitute ‘deep learning’”, Tangredi and Galdorisi explain. DL enables strong AI.
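By way of illustration, the sketch below shows what “more than three layers” looks like in practice: a toy network in Python whose four weight layers would, by Tangredi and Galdorisi’s definition, already count as deep learning. The layer sizes, weights and input data are invented purely for illustration; no naval system is implied.

```python
# Minimal sketch of the layered structure behind deep learning.
# All sizes and data here are illustrative, not drawn from any real system.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # An 'if-then' at the level of each neuron: keep positive values, zero the rest.
    return np.maximum(0.0, x)

# Four stacked weight layers: by the book's definition (more than three
# layers), this already constitutes a 'deep' network.
layer_sizes = [8, 16, 16, 16, 2]  # input -> three hidden layers -> output
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    # Each layer transforms the previous layer's output; training (omitted
    # here) would adjust the weights against labelled examples.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]  # final layer left linear: raw class scores

sample = rng.normal(size=8)  # stand-in for one labelled sensor reading
print(forward(sample))       # raw scores for two hypothetical classes
```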
Unlocking Potential
According to Matthew Caris, senior director at Avascent, “navies and warships in general have had a high degree of automation for a long time, with the most commonplace use of AI being in the Combat Management System (CMS)”. When placed in automatic mode, these systems can detect, identify, classify and prioritise a target before deploying a weapon to engage it. It is worth noting, however, that these systems are seldom put in automatic mode; a human always remains in the loop for the final decision. Automation, as enabled by simple AI, is also widely used by navies for ship machinery controls and propulsion. As ship crews continue to shrink, AI provides critical maintenance and operations support.
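The control flow of such an automatic mode with a human in the loop can be sketched as follows. This is a minimal illustration only: the function names, track data and confirmation step are hypothetical and do not reflect the internals of any real CMS.

```python
# Minimal sketch of the human-in-the-loop gate described above.
# Every function and track here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    classification: str   # e.g. 'hostile', 'neutral', 'unknown'
    priority: float       # higher = engage sooner

def detect_and_classify() -> list[Track]:
    # Stand-in for the sensor and classification chain.
    return [Track(1, "hostile", 0.9), Track(2, "neutral", 0.1)]

def human_confirms(track: Track) -> bool:
    # The final decision stays with the operator; as the article notes,
    # this gate is rarely removed in practice.
    answer = input(f"Engage track {track.track_id} ({track.classification})? [y/N] ")
    return answer.strip().lower() == "y"

def engagement_cycle():
    tracks = sorted(detect_and_classify(), key=lambda t: -t.priority)
    for track in tracks:
        if track.classification == "hostile" and human_confirms(track):
            print(f"Weapon assigned to track {track.track_id}")
        else:
            print(f"Track {track.track_id} held")

engagement_cycle()
```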
Beyond automation, narrow AI is widely used by navies for decision aids. As battlefields grow in complexity, characterised by faster, more varied and more numerous threats, it is increasingly difficult for crews to process and comprehend every potential threat. The advantage of AI in this context is its ability to process and label data much faster than humans can. As such, it can sort through the clutter of the battlefield, allowing system operators to focus on the real threats. “Within the French Navy (Marine Nationale), AI is used to provide support to system operators, processing all the information to ease crew cognitive load and enabling them to focus on decision-making and key tasks,” Jean-Pierre, an AI specialist for the French Navy, told Armada.
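A toy example of such a decision aid, assuming invented contacts and an arbitrary threat-score cut-off, might look like this: hundreds of raw detections are reduced to a short, ranked list for the operator.

```python
# Minimal sketch of clutter filtering as a decision aid. Scores, thresholds
# and contact data are invented for illustration.
import random

random.seed(1)

# Hundreds of raw sensor contacts, most of them clutter (sea return, noise).
contacts = [{"id": i, "threat_score": random.random()} for i in range(500)]

THRESHOLD = 0.98  # hypothetical cut-off tuned so operators see only a handful

shortlist = sorted(
    (c for c in contacts if c["threat_score"] > THRESHOLD),
    key=lambda c: -c["threat_score"],
)

print(f"{len(contacts)} raw contacts reduced to {len(shortlist)} for the operator:")
for c in shortlist:
    print(f"  contact {c['id']}: score {c['threat_score']:.3f}")
```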
Finally, as navies continue to move toward unmanned systems, narrow and, eventually, strong AI will play a significant role in enabling these systems’ autonomy. From navigation – alone or in swarms – to detection, identification and classification, autonomous systems will take on a growing number of tasks that are currently unsafe for humans. Mine Countermeasure (MCM) systems are a good example of this progress, though today they mostly run on narrow AI.
Ones and Zeros
Despite the considerable benefits AI can bring to the battlefield, a number of obstacles remain to the full integration of narrow and strong AI across a wide range of systems in the fleet. As Tangredi and Galdorisi note, while AI may be seen as superior to humans in its ability to process information far faster, “it is still the product of collective human endeavour [with] its limits, costs and risks.” The data these systems rely on to learn and reprogramme themselves is a critical part of those risks.
“A particular challenge for AI in defence applications is the scarcity of accessible data to learn from,” a spokesperson from BAE Systems told Armada. “High fidelity data is often classified and as defence is often about preparing for the worst, data relating to some of the most important types of event for learning simply do not exist as they have never happened.”
The lack of, or lack of access to, sufficient data to train ML and, above all, DL systems is at the heart of what has kept navies – and armed forces in general – from adopting strong AI. Alessandro Massa, head of research laboratories at Leonardo, explained that the most significant hurdles are the certification and reliability of AI for safety- and mission-critical systems. AI needs to be trained in all the environments it will operate in so that it can fully comprehend and distinguish threats; “the smallest chance of mistake, even one percent, can cost lives,” Massa added. To put that figure in perspective, a system assessing 10,000 contacts with a one percent error rate would still mislabel 100 of them.
For the BAE Systems spokesperson there are therefore two challenges: first, connecting innovation with high-fidelity data sets; and second, creating alternative approaches that mitigate the lack of certain data sets. At Naval Group, Loic Mougeolle, director of the Naval Innovation Hub, said that “data collection and sharing issues in relation to AI are so vast and complex that Naval Group has chosen to address them in consortia with other partners.” It is currently doing so through the Confiance.ai consortium.
As for the French Navy, it is in the process of establishing the Maritime Data Service Centre (Centre de Service de la Donnée Marine – CSD-M). Based in Toulon and opened in 2022, the CSD-M has created new structures dedicated to data collection and exploitation: collection focuses on gathering the data generated by all the fleet’s digital systems; the data is then labelled in order to facilitate ML and DL. The end goal is the development of datasets on which AI systems can rely to learn about operational environments, threats and more.
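What “labelling” means in practice can be sketched as follows. The record layout below is an assumption made purely for illustration; the CSD-M’s actual data structures have not been made public.

```python
# Minimal sketch of what labelling fleet data for ML/DL can mean in practice.
# The record layout is invented; it does not describe the CSD-M's systems.
from dataclasses import dataclass, asdict
import json

@dataclass
class LabelledContact:
    timestamp: str        # when the sensor recorded the contact
    sensor: str           # originating system, e.g. 'radar', 'sonar'
    features: list[float] # raw measurements the model will learn from
    label: str            # ground truth added by an analyst

dataset = [
    LabelledContact("2022-06-01T10:14:00Z", "radar", [0.42, 12.7, 3.1], "fishing_vessel"),
    LabelledContact("2022-06-01T10:15:30Z", "sonar", [0.88, 4.2, 9.6], "submarine"),
]

# Once labelled, records can be serialised and pooled into training sets.
print(json.dumps([asdict(r) for r in dataset], indent=2))
```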
Trust Thy Neighbour
AI holds great potential for navies, and some of that potential is already being realised in systems such as the CMS and in specific missions such as MCM. As narrow and strong AI continue to evolve, driven by efforts to provide learning data, the technology will become truly ubiquitous.
Yet, while human-machine teaming is undoubtedly the way forward for more efficient naval operations, one significant obstacle remains: the ability of crews and commanders to trust the results. For instance, as Caris noted, “when relying on a CMS to have targeting information, sorting through information to determine what is real versus what is being jammed, there is a significant element of trust that the system is not being fooled.” Adversaries may tamper with the data fed into AI-enabled machines at many different stages, from the moment the machines are trained to the moment they gather information on the battlefield to provide decision aids. As Tangredi put it to Armada: “how do you create a machine that knows not to trust itself because it might be being deceived by the enemy?”
The use of AI to enable autonomous systems, as well as its role in asymmetric warfare – including cybersecurity – is explored in the article ‘AI’s Achilles Heel’.
by Dr. Alix Valenti