For soldiers, knowing their immediate surroundings is essential to both their combat effectiveness and their survival.
Basic ‘situational awareness’ on the battlefield rests predominantly on the individual soldier’s visual senses. The development of Image Intensification (I2), which uses a photocathode plate to ‘gather ambient light’ that is then electronically displayed on a phosphor screen at a wavelength visible to the human eye, has proved to be a game-changer.
The image is typically yellow-green (using a P-43 screen) or white (with a P-45 screen), although other colours, including red, are possible. A key advantage is that it is passive, emitting no signature itself. Currently, Generation (Gen) III I2, using gallium arsenide photocathode tubes, offers the highest clarity and performance. Gen IV includes ‘gated’ technology that adjusts more quickly to changes in the surrounding light levels. With Gen III, not only has the price of I2 been lowered, but both weight and size have been significantly reduced. As an example, the 1960s-era AN/PVS-2 Starlight Scope weighed 7.5 pounds (3.4 kilogrammes) and was 17.5 inches (445 millimetres) long, while today’s equivalent, the PVS-14, is only 12.5 ounces (0.35kg) and 4.5in (114mm).
These factors have seen I2 widely fielded in palm-sized hand-held surveillance and weapon-mounted sighting versions, as well as head- or helmet-mounted monocular and binocular goggles. I2 is ideally suited to meeting the stand-alone night situational awareness needs of air crews, infantry, and general users. Its reliance on ambient light has become less of a concern in newer systems, whose greater sensitivity allows them to perform at much reduced light levels. Like human vision, however, it is hindered by smoke, fog and other obscurants. As in any sight, the field of view is narrowed, reducing the wearer’s perception and peripheral vision. This can be a particular concern with a head- or helmet-mounted goggle, which requires constant turning of the head to view a full scene. The introduction of panoramic (also called ‘Quad’) night vision goggles with four image tubes begins to address this, expanding the field of view (FOV) to 97 degrees in the L3Harris and EOTech models and 104 degrees in Photonis Defense’s version.
Low Light Level TV
I2 tubes display the resulting image directly to the viewer and do not emit an electronic or digital signal. Although ideal for stand-alone applications, this makes it difficult to incorporate the imagery into a broader situational awareness system. In such applications, Low Light Level TV (LLLTV) is often preferred. LLLTV uses a CCD camera sensitive across the 0.4 to 0.7 micrometre band and into the 1.0 to 1.1 micrometre near-infrared wavelengths, which extend beyond normal human ‘visible’ viewing. It detects objects at extremely low light levels but outputs the result as an electronic digital signal. This digital data can then be converted for display, processed, and/or integrated with other sensor information. Typically, LLLTV cameras have been monochrome (black and white). However, SPI’s X27 LLLTV provides high-definition, full-colour digital images approaching daylight quality even at light levels as low as one millilux. The SiOnyx ultra-low-light colour camera is used in the US Army IVAS (discussed later). The digital signal from the X27 and other LLLTV cameras can be combined with other digital imager information, including that from thermal imaging cameras, offering a more comprehensive picture. These cameras can be compact, permitting them to be more easily positioned all around a vehicle, for example, thereby providing 360-degree surveillance, as is the case in Rheinmetall’s SAS – Situational Awareness System.
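The practical difference is easiest to see in software terms: because an LLLTV camera’s output is already digital, the same frame can be handed simultaneously to a display, a recorder and a fusion engine. The following Python sketch illustrates that idea; the frame structure, field names and ‘consumer’ functions are illustrative and not any vendor’s actual interface.

```python
from dataclasses import dataclass
import time
import numpy as np

@dataclass
class SensorFrame:
    """A digital video frame plus the metadata a downstream consumer needs."""
    source_id: str                  # e.g. "llltv_front_left" (illustrative name)
    timestamp: float                # capture time in seconds
    pixels: np.ndarray              # HxW (mono) or HxWx3 (colour) image data
    waveband: str = "visible/NIR"   # lets a fusion stage pair matching frames

def publish(frame: SensorFrame, consumers):
    """Hand the same digital frame to every registered consumer (display,
    recorder, fusion engine), something a sealed I2 tube cannot offer."""
    for handle in consumers:
        handle(frame)

# Illustrative consumer that simply logs what it received.
received = []
consumers = [lambda f: received.append((f.source_id, f.pixels.shape))]
publish(SensorFrame("llltv_front_left", time.time(),
                    np.zeros((1080, 1920, 3), np.uint8)), consumers)
```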
Thermal Imaging
Where I2 relies on some level of ambient light, thermal imaging detects the temperature differences between objects. The performance of a thermal imager is a function of the sensitivity of its detectors to these temperature differences, referred to as ΔT (Delta T); sensitivity levels of 0.2 degrees C (0.4F) and better are possible. Thermal imaging does not rely on any outside illumination, so it is effective even in a tunnel or building and can see through smoke, fog and vegetation. It is particularly effective in detecting objects with ‘hot’ signatures, including vehicles, humans and animals. Displays can be programmed to assign colours to various temperature differences or, in monochrome scenes, to ‘reverse polarity’ so that either white or black shows the hottest objects.
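How a display turns those temperature differences into an image can be shown with a short sketch. The Python example below normalises a notional 2-D array of per-pixel temperatures to grey levels and flips the polarity on request; the temperature values and array size are invented for illustration and do not reflect any particular imager.

```python
import numpy as np

def render_thermal(temps_c, polarity="white_hot"):
    """Map a 2-D array of per-pixel temperatures (deg C) to an 8-bit
    monochrome image with selectable 'white hot' or 'black hot' polarity."""
    t = temps_c.astype(np.float32)
    # Normalise the scene's temperature span to 0..255 grey levels.
    span = max(float(t.max() - t.min()), 1e-6)
    grey = ((t - t.min()) / span * 255.0).astype(np.uint8)
    if polarity == "black_hot":
        grey = 255 - grey          # hottest objects rendered darkest
    return grey

# Example: a 240x320 scene at ~15 C with a 'hot' 37 C human-sized signature.
scene = np.full((240, 320), 15.0)
scene[100:140, 150:170] = 37.0
white_hot = render_thermal(scene)                # hottest pixels appear white
black_hot = render_thermal(scene, "black_hot")   # hottest pixels appear black
```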
A drawback of thermal imaging has been its lower resolution and lack of finer image detail. This can make it difficult for users to identify the specifics of what they are observing. Even at relatively close range, though it might be evident one was viewing a tank, for example, the type, or whether it was friend or enemy, was not so obvious. The danger was the possibility of firing on friendly forces – something that unfortunately occurred a number of times during battles in Iraq in 2003, which saw the first widescale use of thermal sights on combat vehicles. Developments in two-dimensional focal plane arrays increased the number of pixels, giving a higher resolution to address this. For example, a 160×120 array with 19,200 pixels has low resolution, a 320×240 array with 76,800 pixels has medium resolution, and a 640×480 array (307,200 pixels) has the highest. These arrays also do not require scanning, reducing their complexity. Early thermal imagers also required cryogenic cooling of the detectors, which increased size, weight and cost while adversely affecting reliability. The development of uncooled arrays operating at room temperature resulted in more compact imagers, including handheld and infantry weapon sights. Uncooled 640×480 thermal binoculars and weapon sights are available at well under 2.2lb (1kg) and have become more affordable. Actively cooled detector arrays continue to offer the highest sensitivity and resolution and remain preferred where size, weight and power needs are less critical.
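The pixel counts quoted above follow directly from the array dimensions, as this quick check shows:

```python
# Pixel counts for the focal plane array formats cited above.
for cols, rows in [(160, 120), (320, 240), (640, 480)]:
    print(f"{cols}x{rows} array -> {cols * rows:,} pixels")
# 160x120 -> 19,200; 320x240 -> 76,800; 640x480 -> 307,200
```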
The rapid advance in thermal imager technology is illustrated by comparing the US Army Thermal Weapon Sight (TWS), first fielded in 2006, with its current replacement, the Family of Weapons Sights – Individual (FWS-I). At 6.7×3.6×3.9in (170x91x99mm) and 1.85lb (0.839kg), the latter is significantly more compact while offering increased performance: it permits not simply detecting a target signature but recognising it at around 4,600 feet (1,400 metres). The stand-alone, clip-on FWS-I is manufactured by firms including L3Harris and Leonardo DRS, which was awarded the most recent US Army contract in October 2022.
The Enhanced Night Vision Goggle (ENVG) combines thermal imaging with white phosphor I2 to provide dual-waveband augmented vision to the soldier. The thermal imager has a larger field of view and is ideally suited to detecting targets in low visibility, including smoke or fog, and targets concealed in vegetation. The I2 binocular, which can see through glass and allows maps to be read, complements the thermal imaging. With a fused presentation, the soldier receives the benefits of both technologies. In addition, the ENVG full-colour display is compatible with wireless communications, rapid target acquisition and augmented reality. This includes the ability to feed the image and reticle of a weapon-mounted thermal sight into the goggle, permitting heads-up rapid engagements. The US Army has awarded contracts to both L3Harris and Leonardo DRS for delivery of ENVGs. It is intended to be employed primarily by dismounted infantry, with initial fielding having begun in 2022.
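How the fused presentation works in principle can be illustrated with a short sketch. The actual ENVG fusion processing is proprietary, so the Python example below simply blends two co-registered 8-bit frames, one intensified and one thermal, with a tunable weight; the frame names and the 0.4 weighting are assumptions for illustration.

```python
import numpy as np

def fuse_frames(i2_frame, thermal_frame, thermal_weight=0.4):
    """Blend a co-registered I2 frame and thermal frame into one display image.
    Both inputs are 8-bit greyscale arrays of the same shape; thermal_weight
    sets how strongly hot signatures 'pop' through the intensified scene."""
    i2 = i2_frame.astype(np.float32)
    th = thermal_frame.astype(np.float32)
    fused = (1.0 - thermal_weight) * i2 + thermal_weight * th
    return np.clip(fused, 0, 255).astype(np.uint8)

# Illustrative call with placeholder frames of matching size.
display = fuse_frames(np.zeros((720, 1280), np.uint8),
                      np.zeros((720, 1280), np.uint8))
```

Fielded goggles typically go further, for instance outlining hot objects over the intensified scene rather than blending whole frames, but the underlying idea of combining the two wavebands pixel by pixel is the same.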
A step beyond the ENVG is the US Army Integrated Visual Augmentation System (IVAS). An adaptation of the Microsoft HoloLens, it not only incorporates night/thermal vision but can present a wide range of information to the wearer, including passive targeting, navigation, friendly positions, and the ability to share and view data from outside sources. Referred to as ‘mixed reality’, it can combine real views with overlaid simulated objects, terrain or characters. This allows its use as a training aid as well as an operational tool.
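A stripped-down sketch of the ‘mixed reality’ overlay idea follows: given the wearer’s heading, the bearing to a known friendly position and the display’s horizontal field of view, a marker is drawn where that bearing falls on the frame. IVAS’s actual HoloLens-based rendering pipeline is not public, so the geometry, the 80-degree FOV and the marker style are purely illustrative.

```python
import numpy as np

def overlay_marker(frame, wearer_heading_deg, target_bearing_deg, fov_deg=80.0):
    """Draw a crude vertical marker where a known bearing falls within the
    display FOV; return the frame unchanged if the bearing is out of view."""
    h, w = frame.shape[:2]
    # Signed angular offset from the display centreline, wrapped to -180..180.
    offset = (target_bearing_deg - wearer_heading_deg + 180.0) % 360.0 - 180.0
    if abs(offset) > fov_deg / 2:
        return frame
    x = int(w / 2 + (offset / (fov_deg / 2)) * (w / 2))
    frame[:, max(x - 1, 0):min(x + 2, w)] = 255
    return frame

frame = np.zeros((720, 1280), np.uint8)
frame = overlay_marker(frame, wearer_heading_deg=90.0, target_bearing_deg=110.0)
```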
IVAS faced a number of technology challenges, including the low performance of its current low-light sensor, image distortion, a limited FOV, and soldier form-factor issues. Although a contract was awarded in 2020, the procurement was put on hold pending further testing. However, in September 2022 the acquisition of 5,000 of both the IVAS 1.0 and 1.1 versions was approved by the Secretary of the Army. BGen Christopher Schneider, commander of Program Executive Office – Soldier, stated that “work is continuing to correct the issues identified in the current IVAS with a version 1.2 already in the works.” Part of this is the development of an Advanced Low Light Sensor (ALLLS), an effort for which Elbit Systems received a contract in September 2022. The objective is to have this new sensor sometime after 2025 for an IVAS 2.0.
Vehicle 360 Degree
The greatest weakness of a tactical vehicle, particularly an armoured vehicle, is the restricted visibility of the crew, which can hinder mobility. The introduction of very compact cameras opened the possibility of placing them to cover blind spots. This was initially done on specialty protected mine-clearance and explosive ordnance disposal (EOD) vehicles and later expanded to other combat vehicles. With onboard digital networks it became possible to increase the number of cameras linked together. Airboss Defense Systems’ 360 SA, for example, includes four fixed 102-degree wide-angle cameras plus a single 36X zoom pan/tilt (PTZ) camera, with controls allowing the operator to switch views. Copenhagen Sensor Technology’s (CST) Cortex Resolve Platform uses seamless stitching to combine images from multiple Citadel high-definition, low-latency cameras and provide all-around views for combat vehicles. Elbit Systems’ Iron Vision takes a similar approach with both perimeter cameras and other surveillance and targeting sensors, but displays the information on a soldier’s helmet-mounted visor, drawing on the company’s experience with aircrew displays. As Tom Carlisle, director Army Solutions, added: “It also provides for integration of external surveillance information such as that from a tethered UAS.” Rheinmetall’s SAS incorporates both day and night sensors while employing automatic target recognition and threat sensors like Acoustic Shooter Localisation (ASLS) and laser warning to reduce crew task load. By linking the SAS with onboard active protection and fire controls, it facilitates a response to an identified threat.
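The ‘seamless stitching’ CST describes can be approximated with off-the-shelf tools. The sketch below uses OpenCV’s general-purpose feature-based stitcher on frames grabbed from four cameras; the camera indices are placeholders and the pipeline is illustrative rather than any vendor’s implementation.

```python
import cv2

def stitch_panorama(frames):
    """Combine overlapping frames from adjacent hull cameras into one
    wide panorama using OpenCV's feature-based stitcher."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed (status {status})")
    return panorama

# Illustrative: grab one frame from each of four hull cameras (indices assumed).
frames = []
for index in range(4):
    cap = cv2.VideoCapture(index)
    ok, frame = cap.read()
    cap.release()
    if ok:
        frames.append(frame)

panorama = stitch_panorama(frames) if len(frames) >= 2 else None
```

A fielded vehicle system would normally rely on fixed, factory-calibrated camera geometry rather than re-matching image features every frame, which is part of how the low latency these vendors advertise is achieved.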
The introduction of wide-FOV driver’s vision enhancers, such as Leonardo DRS’s ESA with 107×30-degree coverage from three uncooled thermal cameras augmented by side and rear cameras, expands on previous drivers’ viewers by offering perspectives that cover not only road edges but other hazardous areas as well.
The benefits of integrated 360-degree surveillance go beyond simply offering a wider view of a vehicle’s surroundings. It also enhances safety, particularly when moving in tight quarters like urban areas and forests. In a networked system, all members of the crew and, in the case of infantry carriers and fighting vehicles, the embarked squad members are able to actively monitor the situation outside the vehicle. The latter can prepare themselves and ensure they are ready to deploy appropriately prior to disembarking. The latest situational awareness (SA) systems, taking advantage of advances in image processing, are able to offer automatic target detection. This offers the possibility of providing an alert to the crew of a potential hazard or threat. The performance of an SA system is partly based on the sensors, especially the cameras, utilised.
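A minimal sketch of that detection-to-alert step is shown below, using OpenCV’s stock pedestrian detector purely as a stand-in for the proprietary automatic target recognition these systems employ; the alert fields and camera name are invented for illustration.

```python
import cv2
import numpy as np

# Stock HOG pedestrian detector as a stand-in for proprietary ATR models.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def check_frame(frame, camera_id):
    """Return a simple alert for the crew display if a person-shaped
    signature appears in this camera's frame, otherwise None."""
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) == 0:
        return None
    # Report the strongest detection only.
    best = max(zip(boxes, weights), key=lambda bw: float(bw[1]))[0]
    x, y, w, h = (int(v) for v in best)
    return {"camera": camera_id, "bbox": (x, y, w, h),
            "cue": "possible personnel"}

alert = check_frame(np.zeros((480, 640), np.uint8), "rear_left")  # None here
```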
Ryan Edwards, business development director for Soldier and Vehicle Electronics at BAE Systems, explained, “Our 360 MVP uncooled thermal cameras provide 120 degree horizontal by 75 degree vertical field of view with a 1920×1200 pixel pitch delivered at 60Hz. The advantages include superior detection range and coverage (fewer cameras needed). The higher frame rate also enables reduced false alarm rate for threat detection applications.”
It is the integration of the various on-board sensors, cameras, and other detection assets that offers the optimum tactical return from a vehicle situational awareness system. Simply adding cameras and displays risks complicating the crew’s ability to perform their functions in fighting the vehicle: too much information can result in task overload. Addressing this requires a “smarter” system that manages, and potentially processes, these various sensor inputs.
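One simple way such a “smarter” layer can keep the crew cued rather than flooded is to score and rank incoming alerts and surface only the most urgent few. The sketch below is notional; the alert types, priority values and thresholds are invented and not drawn from any fielded system.

```python
import heapq

# Illustrative priority ranking: lower number means more urgent.
PRIORITY = {"laser_warning": 0, "shot_detected": 0,
            "possible_vehicle": 1, "possible_personnel": 2}

def triage(alerts, max_shown=3, min_confidence=0.6):
    """Drop low-confidence detections and surface only the most urgent
    few alerts, so the crew is cued rather than flooded."""
    kept = [a for a in alerts if a["confidence"] >= min_confidence]
    return heapq.nsmallest(max_shown, kept,
                           key=lambda a: (PRIORITY.get(a["type"], 9),
                                          -a["confidence"]))

alerts = [
    {"type": "possible_personnel", "confidence": 0.55, "bearing": 40},
    {"type": "shot_detected",      "confidence": 0.90, "bearing": 310},
    {"type": "possible_vehicle",   "confidence": 0.75, "bearing": 120},
]
for a in triage(alerts):
    print(a["type"], a["bearing"])   # shot_detected first, weak contact dropped
```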
One approach is Vegvisir, from Defensphere, an Estonian-Croatian firm. Vegvisir combines inputs from on-board sensors and innovative cameras that are synthesised to display not only objects of immediate interest but also those as much as six miles (10km) distant. Company CEO Ingvar Pärnamäe stated that it “is a modular system that combines four complementary layers of sensors to ensure awareness in a range of tactical situations”. Information is presented on helmet-mounted displays. In fact, the US Army has similarly explored the application of its IVAS as an option for integrating and presenting information to combat vehicle-mounted infantry. In such an application, these embarked troops can access the vehicle’s SA assets while on board and then smoothly transition to dismounted squad combat. The concept has been demonstrated by Army Stryker units in field experiments.
“BAE Systems’ BattleView 360 concept”, as Dan Lindell, BAE Systems’ platform manager for the CV90 Infantry Fighting Vehicle, explained, “is focused on a way of helping soldiers on the battlefield understand their environment, to quickly identify hazards and react to rapidly evolving scenarios.” It not only provides soldiers with a 360-degree, real-time view outside of their combat vehicles but also stitches together a complete picture of the battlefield. It further allows plugging into a computer to digitally collate, map, and classify various features on the battlefield to track their environment. In addition, soldiers can share what they are seeing with other crew members or their commanders.
Future Integration
Past developments have focused on improving the soldier’s ability to see. Technology advances have pushed this to include aiding vision at night, in low visibility, and detecting in wavelengths beyond those of the human eye. Generally, each of these was a stand-alone capability – such as I2 or thermal – each with its own advantages and weaknesses.
The current direction is towards integrating these into a common fused display. Beyond that lies the possibility of capitalising on digitalisation, processing power, and memory storage to further augment the soldier’s vision. Advances in these fields have the potential not only to detect but also to classify and identify what is being viewed. In addition, the seamless sharing of that image with others in near-real time will expand awareness at multiple levels. Battlefield ‘vision’ will no longer be an individual task but rather a community effort.
by Stephen W. Miller