AI’s Achilles Heel?

Naval Group AI
In the roadmap for its 360° optronic watch system, Naval Group includes the introduction of powerful AI algorithms to provide commanders with critical decision-making support.

Artificial Intelligence is promising to speed up the analysis of data to further tighten the OODA loop for commanders. But does it also have a weakness?

In his Strategic Vision, published in October 2021, the French Chief of the Defence Staff (Chef d’État-Major des Armées – CEMA), General Thierry Burkhard, wrote that “the objective is to win the war before the war…”. To achieve this, armed forces must contribute to the understanding of opponents’ capabilities and intentions so as to constantly offer pertinent military options to political decision-makers.

The growing use of asymmetric warfare tactics by peer and near-peer adversaries is significantly challenging this vision. At sea, the quantity and pace of unmanned systems, fast boats and cyber warfare tactics are increasingly challenging commanders’ ability to retain superior maritime domain awareness (MDA). In such a context, artificial intelligence (AI) has the potential to become a real game changer, offering fast data processing for a quicker understanding of ‘pertinent military options.’ Yet it is also creating new ground on which to practise the art of deception: if critical data is susceptible to tampering, how can commanders decide what is true?

Asymmetric Warfare 101

“Asymmetric threats are, by definition, unconventional, often delivered through cheap means and relatively easy to carry out,” Catherine Vanderpol, Asymmetric Product Manager at Naval Group, told Armada International. The element of surprise is critical to asymmetric warfare, as the threat remains concealed until the very last minute through deception or speed. “This leaves very little reaction time in which to understand the threat and decide on a course of action,” Vanderpol continued.

Long associated with non-state actors such as resistance fighters or terrorists, asymmetric warfare has now fully entered the realm of peer and near-peer competition. It is characterised by the hybrid nature of the threats involved, combining conventional warfighting with covert, subversive or irregular asymmetric approaches to create ambiguity and confusion for the opponent. Just as importantly, it takes place in the grey zone, just below the threshold of armed conflict, requiring a comparatively small military footprint, concealing real capabilities and containing political costs.

In the maritime domain, asymmetric warfare is characterised by an increasing use of unmanned systems whether aerial, surface or underwater. As noted by Frank Christian Sprengel in the Hybrid CoE Working Paper 10, Drones in hybrid warfare: Lessons from current battlefields, published in June 2021: “Unmanned, long-distance precision-strike weapons systems, like drone warfare capabilities, are virtually ideal for supporting [grey zone approaches].”

Fast attack craft, such as those used by the Iranian Islamic Revolutionary Guard Corps Navy (IRGCN) to harass US Navy (USN) vessels in the Strait of Hormuz, at the entrance to the Persian Gulf, in May 2021, are also typical asymmetric warfare assets.

US naval vessels
Two Iranian Islamic Revolutionary Guard Corps Navy (IRGCN) fast in-shore attack craft (FIAC) conducted unsafe and aggressive runs in close proximity to US naval vessels transiting the Strait of Hormuz on 10 May 2021. Such asymmetric threats challenge a commander’s maritime domain awareness.

Finally, cyber attacks have also become ubiquitous in the world of asymmetric warfare, whether carried out directly by attacking information technology (IT) systems to hack or corrupt data, or indirectly by attacking the functioning of a ship’s systems. “The first thing an attacker seeks to do in the cyber domain is to leave no evidence and no proof of its capabilities,” French Navy Commander Eric told Armada. As his work is sensitive, we cannot disclose his full name.

The digitalisation of maritime warfare over the past decade is both a blessing and a curse when it comes to tackling asymmetric threats.

Naval vessels now feature digital systems including radar, sonar, and satcom, all of which produce large quantities of data “that can subsequently be analysed and leveraged to examine patterns of life and detect suspicious activity and threats,” explained a NATO spokesperson. The unprecedented quantities of data these systems produce, however, significantly challenge human cognitive capacities. When faced with the surprise, ambiguity and speed that characterise asymmetric warfare, quickly detecting threats in such a complex MDA picture is akin to looking for a needle in a haystack.

“At crew level, what we observe is that it is challenging for an operator to sit for hours in front of a console, trying to stay alert to the slightest possible threat,” Vanderpol commented. In such a context, there is a risk that an operator loses concentration at the critical moment when an asymmetric threat appears. “Elaborating an appropriate riposte within a very short timeframe therefore becomes even more challenging.” At commander level, “reference scenarios will become faster, featuring an increasing number of autonomous systems,” Alessandro Massa, head of Research Laboratories at Leonardo, told Armada. “It will become very challenging for any commander to follow the pace and retain superior MDA in asymmetric warfare scenarios.”

Accelerating the OODA Loop

AI has a crucial role to play in supporting crews’ and commanders’ abilities to process the Observe, Orient, Decide and Act (OODA) loop at much higher speed. In their chapter Practical Applications of Naval AI – An Overview of Artificial Intelligence in Naval Warfare, published in 2021 in the book AI at War – How Big Data, Artificial Intelligence, and Machine Learning Are Changing Naval Warfare, edited by Sam J. Tangredi and George Galdorisi, Connor S. McLemore and Charles R. Clark wrote: “AI doesn’t suffer from a limited attention span and has proven particularly efficient at recognising visual and audio patterns.”

Whether it relies on algorithms that simply perform calculations faster than humans (weak AI) or on algorithms that learn from data and self-programme to perform specific activities (narrow AI), the introduction of this technology in the naval warfighting domain holds great potential for dealing with the vast quantities of data produced by sensors. “Not only can AI extract more accurate information from individual sensor modalities … it can also greatly assist in combining possibly conflicting information from multiple sensors to increase the accuracy of actionable intelligence,” Dr Jill Crisman, principal director at the US DoD Office of the Director of Defence Research and Engineering for Modernisation, told the author.
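
The kind of fusion Dr Crisman describes can be sketched in a few lines. The example below is purely illustrative, not any vendor’s implementation: the sensor names, class labels and reliability weights are hypothetical. It combines possibly conflicting classification beliefs from several sensors through a confidence-weighted average and flags disagreement for the operator.

```python
# Illustrative sketch only: fusing conflicting sensor classifications with a
# confidence-weighted average. All names and numbers are hypothetical.
import numpy as np

CLASSES = ["fast_attack_craft", "fishing_vessel", "uav"]

# Each sensor reports a probability distribution over the classes plus a
# reliability weight (e.g. derived from sea state or signal quality).
reports = {
    "radar":    (np.array([0.55, 0.40, 0.05]), 0.9),
    "optronic": (np.array([0.20, 0.75, 0.05]), 0.6),
    "esm":      (np.array([0.70, 0.20, 0.10]), 0.8),
}

def fuse(reports):
    """Weighted average of sensor beliefs; flags disagreement for the operator."""
    weights = np.array([w for _, w in reports.values()])
    probs = np.stack([p for p, _ in reports.values()])
    fused = (weights[:, None] * probs).sum(axis=0) / weights.sum()
    winner = int(fused.argmax())
    # Disagreement flag: spread between sensors on the winning class.
    disagreement = probs[:, winner].max() - probs[:, winner].min()
    return CLASSES[winner], float(fused[winner]), disagreement > 0.3

label, confidence, needs_review = fuse(reports)
print(label, round(confidence, 2), "operator review:", needs_review)
```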

For the ‘observe’ aspect of the OODA loop, industry has been working to introduce (weak) AI in sensor technologies as a cognitive aid to provide fast and accurate threat detection and identification. “We want to get operators out of the data loop where they are sifting through vast quantities of data to identify threats,” James Wright, technical director ISRS at Raytheon Intelligence and Space, told Armada. To this end, Raytheon is working with customers to introduce AI-enabled cognitive aids in electronic warfare (EW) systems so that “intelligent algorithms quickly present operators with the relevant information about their MDA and operators can concentrate on choosing a course of action.”

Similarly, Thales is working on introducing AI into its radars to make them smarter. “A smart radar brings autonomy to the battlespace,” added Hervé Hamy, vice president of ISR activities at Thales. The aim is to develop systems that can not only perform automated classification, but can also autonomously learn and adjust to their environment to enable faster detection and identification in complex environments. For example, the roadmap for Thales’ AirMaster C radar includes the introduction of a wide range of AI applications.

AirMaster C radar
Thales will include a wide range of AI applications in its systems to provide operators with cognitive aid, including in its AirMaster C radar.

As Jean-Pierre, an AI specialist for the Marine Nationale, told Armada: “At tactical level, a better perception of our environment will facilitate commanders’ decision making as they will be able to act upon much more accurate and rapid MDA information.”

However, providing faster information to the operator is only one step toward achieving the quick reaction time needed to tackle asymmetric threats. The ‘orient’ part of the OODA loop is just as important: as David Lange and José Carreño note in their chapter How AI is Shaping Navy Command and Control, also published in the AI at War book, it is about “making sense of the information provided and updating one’s model of the world.”

In the context of asymmetric warfare, AI becomes critical for Command and Control (C2) decision support. It can offer commanders different courses of action that take into account the status of all the ship’s systems, such as Close-In Weapon Systems (CIWS), missiles and decoys. “By quickly and accurately making sense of the environment and forming a common operating picture, decisions can be improved and accelerated both at the tactical and operational levels,” Dr Crisman added.

The same can be applied at task force level, as noted by the NATO spokesperson: “Considering the level of complexity of the data coming from the symbiotic integration of multiple complex warships, AI would provide significant decision support to a task force commander.”

In this context, Naval Group is currently developing a 360° optronic watch system to be integrated on a naval vessel. Sensors gather information that is then fused and processed to present operators with actionable information via the Combat Management System (CMS). “The roadmap for this system includes the introduction of powerful AI algorithms to provide additional decision-making support,” Vanderpol commented. Similarly, Leonardo is also progressively introducing AI within its C2 and CMS systems.

Cyber Catch-22

The digitalisation of naval vessels, while a great force multiplier in the context of asymmetric warfare, also has the potential of being its Achilles’ heel in the cyberspace domain. “Where today directly attacking a ship might prove complicated, hacking its systems to temporarily disable them or to steal or corrupt their data is much easier,” explained Commander Eric.

A cyber-attack on a ship can happen at multiple levels, not necessarily targeting the systems believed to be the most critical. “Most people will assume that targeting a ship’s propulsion system will have the most significant impact, but the reality is that by targeting systems such as a freezer the damage inflicted by food poisoning on crew is just as crippling,” said Commander Eric. An affected automated system can be fixed by people, but if all the crew are ill… who will take over on the ship?

Cyber-attacks also target the data that crews and commanders rely on to make critical decisions. Data can simply be hacked, potentially revealing classified information as well as important system health monitoring information. The same data can also be tampered with: critical data coming from one of the sensors could be altered to create ghost threats in one location while the real attack comes from another angle, or navigation devices could be spoofed.

“In sum, the single greatest difficulty in cyber defence is that every system could represent a vulnerability and protecting them all is an extremely difficult and time-consuming task,” Commander Eric warned. “The key is detecting the attack before it happens, at the reconnaissance stage when the hacker is looking for vulnerabilities.” Otherwise, once the surprise attack happens, it is already too late.

AI’s potential in this context is evident: AI-enabled systems can be programmed to sift through the overwhelming number of small ‘events’ taking place in cyberspace and detect anomalies. Leonardo’s Security Operation Centre (SOC) aims to address this by monitoring all the ship-to-ship and ship-to-shore digital communications taking place. “We train the AI to recognise any early sign of intrusion to be able to address any potential vulnerability,” Massa said.
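
To make the idea concrete, the fragment below is a minimal sketch of this kind of anomaly detection, not Leonardo’s SOC. It assumes that numeric features (packet rate, bytes transferred, failed logins per time window) have already been extracted from the monitored links, and it uses a stock isolation-forest detector trained on baseline traffic; every number in it is made up for illustration.

```python
# Minimal anomaly-detection sketch over per-window traffic features.
# Not a real SOC pipeline; all values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline gathered during normal operations: [packets/s, bytes/s, failed logins].
baseline = rng.normal(loc=[200.0, 5_000.0, 1.0],
                      scale=[20.0, 500.0, 1.0],
                      size=(1_000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observation windows: one looks routine, one shows a burst of failed
# logins, which may indicate the reconnaissance phase of an intrusion.
new_windows = np.array([
    [205.0, 5_100.0, 0.0],
    [210.0, 5_050.0, 40.0],
])
print(detector.predict(new_windows))  # 1 = normal, -1 = flagged for the analyst
```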

But AI can also be deceived through data poisoning. “Its greatest weakness is the pertinence of its analysis when facing unusual scenarios presenting even the slightest variations compared to what it learnt,” Commander Eric warned. An attacker could take the time to poison the dataset used for machine learning by slightly modifying the features of a ship or missile in the training data. In an operational context, the sensors and CMS would then fail to recognise that ship or system as a threat. Similarly, AI can also be fooled into blocking access to a system that it believes has been hacked, affecting the ship with minimum effort on the attacker’s part.
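
The toy example below illustrates the mechanism Commander Eric describes, using entirely hypothetical feature values and a deliberately simple classifier rather than any real combat system: a handful of threat-like signatures planted in the training data with a ‘benign’ label is enough for a nearby contact to slip through unflagged.

```python
# Toy illustration of training-data poisoning; features and labels are invented.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Hypothetical two-feature signatures: [radar cross-section, speed in knots].
benign = rng.normal([80.0, 12.0], [10.0, 3.0], size=(200, 2))   # merchant traffic
threat = rng.normal([15.0, 40.0], [5.0, 5.0], size=(200, 2))    # fast attack craft
X = np.vstack([benign, threat])
y = np.array([0] * 200 + [1] * 200)          # 0 = benign, 1 = threat

clean_model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Poisoning: the attacker plants signatures almost identical to the craft they
# intend to use, but labelled as benign, in the data the model learns from.
poison = rng.normal([14.0, 42.0], [0.5, 0.5], size=(30, 2))
X_poisoned = np.vstack([X, poison])
y_poisoned = np.concatenate([y, np.zeros(30, dtype=int)])
poisoned_model = KNeighborsClassifier(n_neighbors=5).fit(X_poisoned, y_poisoned)

contact = np.array([[14.0, 42.0]])           # the attacker's actual craft
print("clean model:   ", clean_model.predict(contact))     # [1] -> flagged as threat
print("poisoned model:", poisoned_model.predict(contact))  # [0] -> slips through
```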

Addressing this issue is complex because it hinges both on the quality of the data – what was hacked and modified – and on the timing of the hack – how far back in time the training data has been poisoned. The French Navy is attempting to tackle this challenge through its cyber defence support centre, within which training, SOC and intervention/projection departments seek to increase readiness and strike the right balance between maintaining high alert and preventing too many false alarms. “Cyber-attacks are the perfect example of asymmetric threats: an attacker only needs to find one vulnerability to have a significant impact, whereas cyber defence requires constant monitoring and adaptation,” Commander Eric noted. The former is cheaper and easier than the latter.
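
That balance between alertness and false alarms is, in practice, a threshold-setting exercise. The snippet below is a hedged sketch with made-up numbers, not the French Navy’s method: it picks an alert threshold from a benign baseline so the expected false-alarm rate stays below a target, then estimates how many hypothetical attack windows that setting would still catch.

```python
# Hedged sketch of alert-threshold tuning; all distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
benign_scores = rng.normal(0.2, 0.1, size=10_000)   # anomaly scores on known-good traffic
target_false_alarm_rate = 0.001                      # one alert per thousand windows

# Threshold chosen so only the top 0.1% of benign windows would raise an alert.
threshold = np.quantile(benign_scores, 1 - target_false_alarm_rate)
print(f"alert when score > {threshold:.3f}")

attack_scores = rng.normal(0.6, 0.15, size=1_000)    # hypothetical attack windows
detection_rate = float((attack_scores > threshold).mean())
print(f"expected detection rate at that setting: {detection_rate:.2%}")
```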

Operationalising AI

The widespread adoption of asymmetric warfare tactics by peer and near-peer adversaries has had a significant impact on maritime warfare. Multi-dimensional – air, surface and underwater – and multi-domain – maritime and cyber – these tactics seriously challenge crews’ and commanders’ ability to retain superior MDA.

The introduction of AI for sensors, sensor fusion and data processing holds significant potential to shorten the OODA loop and support commanders in their decision-making. “It is not just about finding the needle in the haystack,” Josh Jackson, senior VP of Operations and Naval Business Unit Lead at SAIC, an engineering and IT company, told Armada, “it is also about understanding what haystack you want to look into.” Through faster processing and self-learning, AI can help identify both the proverbial haystack and the needle.

US Navy seaman
A US Navy seaman stands watch as the radar systems controller in the combat information center aboard the Arleigh Burke-class guided-missile destroyer USS Porter (DDG 78), 30 December 2018. AI should result in faster data processing, leading to quicker decision-making.

Yet AI’s reliance on datasets to learn and self-programme opens AI-enabled systems to cyber-attacks that could potentially have critical impacts on capabilities and force structure. “In the world of cyber we are constantly adapting to new threats,” Commander Eric noted, and AI’s greatest strength – identifying unusual behaviour – can quickly become its biggest weakness – being fooled by its own algorithms and processes.

As the Chinese general and philosopher Sun Tzu said: “All warfare is based on deception.” In the context of asymmetric warfare, the risk of deception is at the root of the slow pace of AI adoption, despite all the technology’s promise. While industry is developing systems that can accommodate AI algorithms and innovations, navies are taking a rather slow, step-by-step approach, often working across disciplines and between multiple stakeholders to progressively build trust.

In 2018, the United States DoD established the Joint Artificial Intelligence Center (JAIC) to harness the potential of AI for defence and security applications. Within the JAIC, the DoD created what Dr Crisman refers to as ‘AI Hubs’, that is, coordination hubs facilitating interdisciplinary research and development. “The AI Hubs will be hosting continuous AI task competitions to create perpetually improving AI solutions that can then be integrated quickly into mission apps or systems,” Dr Crisman told the author.

Similarly, the French Navy is setting up a Centre for Maritime AI, an administrative structure that will facilitate interaction and work between the Navy’s AI training centre, industry and the Navy Chief of Staff’s office in order to support AI projects. “The aim of the Centre for Maritime AI is to take AI projects with a good Technology Readiness Level (TRL), and support them in maturing faster through those TRLs,” Jean-Pierre explained. “Ultimately, what we are looking for is speeding up the process, from development to operationalisation, by way of better cooperation between key stakeholders.”

Finally, at NATO level, the NATO Intelligence Fusion Centre (NIFC) is also working on integrating AI technologies. “The NIFC fuses AI to conduct ultimate imagery analysis and explore how the Alliance could further use AI,” the NATO spokesperson added.

The key challenge in the near-term will therefore lie in building trust in, and enabling security by design of, AI-enabled systems. Until then, naval operators and commanders will continue to strongly rely on their judgement in all circumstances, including asymmetric warfare. “I think it also underscores the ongoing importance of the human intelligence and the necessary partnership with machines,” Jackson concluded.

by Dr. Alix Valenti

Defence and security freelance journalist.