Security Matters: Malware Jumps the Air Gap

[Photo: power plant]

During the summer and fall of 2010, a quiet cyberwar erupted between Iran on one side and the United States and Israel on the other. The weapon, a computer virus known as Stuxnet, allegedly breached the networks of a sensitive nuclear fuel enrichment facility in Iran and caused extensive physical damage to the equipment inside the plant. Reports indicate that the attack dealt a significant blow to the Iranian nuclear program and, although the attacking nations never officially took responsibility, they coyly suggested their involvement.

The Stuxnet incident fascinated technologists around the world for a number of reasons. In addition to being one of the first large-scale cyberwarfare attacks in history, Stuxnet made several technical advances over existing malware. Its creators carefully crafted the virus to target the enrichment equipment at a facility in Natanz, Iran, and designed it to covertly infiltrate the Natanz network, defeating the security controls already in place. Those controls included air gap protection, a safeguard typically found in sensitive industrial facilities.

What Is Air Gap Protection?

Air gap protection, found in many critical computing environments, is one of the gold-standard practices in cybersecurity. The control requires completely disconnecting sensitive systems from any public network. There is no firewall, no gateway, no filtering of traffic between the Internet and the sensitive network. Rather, there is simply no path between the sensitive network and the public network: there is nowhere to apply filtering because no connection exists at all.

The theory is that it will be especially difficult, if not impossible, for an attacker to compromise a network that is not reachable from any public network. If the control is implemented properly, breaking into an air gapped network requires physically entering the facility and obtaining a wired connection to the network. Facilities making use of air gap protection typically supplement that network security control with armed guards and other defenses designed to prevent unauthorized physical access.

Implementing air gap protection is not easy. Completely disconnecting a network from the Internet does make it difficult for attackers to gain access, but it also makes it quite difficult for system administrators and users to do their jobs. Need to check the weather forecast or pull down a reference guide for a piece of hardware? Sorry, you can't do that from the air gapped network. Users must turn to a completely separate computer, sometimes in a different room, that connects to the Internet.

Need to download and apply a security update? While Internet-connected systems can simply pull updates from the manufacturer's update servers, there's no such option on an air gapped network. Instead, administrators typically configure an update server on the air gapped network and then hand carry updates on USB sticks from an Internet-connected system to that server, which then distributes them to the other systems on the air gapped network. Definitely not an easy way to get the job done!
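
There's no standard tool for that hand-carry step, but the idea is easy to sketch. The short Python script below is a hypothetical illustration, not any vendor's actual update process: before files copied from a USB stick are accepted by the internal update server, their SHA-256 hashes are checked against a manifest written on the Internet-connected side. The paths, file names and manifest format are all assumptions made for the example.

```python
"""Minimal sketch of verifying hand-carried updates before importing them.

Assumption: the Internet-connected machine wrote a 'manifest.txt' next to the
downloaded updates, one line per file in the form '<sha256>  <filename>'.
All paths and formats here are hypothetical, for illustration only.
"""

import hashlib
import shutil
import sys
from pathlib import Path

USB_DIR = Path("/media/usb/updates")              # removable media carried across the gap
IMPORT_DIR = Path("/srv/update-server/incoming")  # staging area on the internal update server


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_manifest(manifest: Path) -> dict[str, str]:
    """Parse '<sha256>  <filename>' lines into a filename -> hash mapping."""
    entries = {}
    for line in manifest.read_text().splitlines():
        if line.strip():
            expected_hash, name = line.split(maxsplit=1)
            entries[name.strip()] = expected_hash.lower()
    return entries


def import_updates() -> None:
    """Copy each listed file onto the update server only if its hash matches."""
    expected = load_manifest(USB_DIR / "manifest.txt")
    IMPORT_DIR.mkdir(parents=True, exist_ok=True)
    for name, expected_hash in expected.items():
        source = USB_DIR / name
        if sha256_of(source) != expected_hash:
            sys.exit(f"Hash mismatch for {name}; refusing to import anything.")
        shutil.copy2(source, IMPORT_DIR / name)
        print(f"Imported {name}")


if __name__ == "__main__":
    import_updates()
```

In practice an organization would layer more on top of this, such as digital signatures and malware scanning of the media itself, but even a simple integrity check narrows the opportunity for tampering in transit.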

The difficulty of building and maintaining air gapped networks means that you won't find them in many places. Only the most sensitive facilities implement air gaps, and they do so only when the stakes are extremely high. You might find air gapped networks, for example, protecting medical equipment in a hospital, controlling sensitive manufacturing processes, running aircraft computer systems or operating dangerous weapons systems. They simply aren't worth the overhead in less sensitive environments.

Making the Initial Leap

For years, computer security experts considered air gap protection virtually impossible to defeat. As long as nobody built a network connection that bridged the air gap, the systems would remain secure from attack. Indeed, administrators often overlooked other security controls on air gapped networks, figuring that antivirus updates and operating system security patches were superfluous on systems that threats couldn't reach in the first place.

Along came Stuxnet. The virus jumped the air gap and served as a wake-up call, not only to the Iranian nuclear program, but also to security professionals around the world who rely on air gap protection. Analysts believe that Stuxnet jumped the air gap using a very simple technology: USB sticks. Remember, anyone seeking to introduce data onto an air gapped network must physically carry it on removable media from the network of origin to the air gapped network.

Most people believe that Stuxnet spread into the Natanz facility during exactly that type of data transfer. An Iranian worker with access to the air gapped network most likely carried an infected USB drive into the facility without realizing it and inserted it into a system on the air gapped network. Stuxnet recognized that it had finally penetrated the target network and carried out the carefully crafted instructions designed specifically to destroy equipment at Natanz.

Communicating Across an Air Gap

Designing Stuxnet required a significant investment of time, money and information. Whoever put it together had very precise knowledge of the equipment at Natanz and was able to code the deadly instructions without entering that facility. This likely required months of careful effort and a sophisticated testing facility containing difficult-to-obtain equipment. This elaborate effort was critical because the designers only had one chance to get Stuxnet correct.

Once the virus was loose on the air gapped network, the attackers were blind to its activity. The virus could not communicate back across the air gap to report success or failure. Similarly, it could not receive new instructions from its designers. Until recently, crossing the air gap was a one-way operation: attackers who were both skilled and lucky might be able to introduce malware onto an air gapped network, but they were unable to control the malware after it crossed or to exfiltrate information from the compromised network.

German computer scientists recently published a research paper detailing an approach that could build a two-way communication link across an air gap. How does it work? Computers on the air gapped network and computers on public networks talk to each other, literally. Once malware compromises systems on both networks, it commandeers the speakers and microphones on the infected machines and "talks" across the air gap using frequencies that are inaudible to the human ear.

The systems might not share a network connection, but they can speak to each other using this technique. That's a very dangerous potential enhancement to air gap jumping malware!
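
To make the idea concrete, here is a toy sketch, not the researchers' implementation, of how bits might be encoded as near-ultrasonic sound: each bit of a message becomes a short tone at one of two high frequencies that a speaker can emit and a nearby microphone can record. The frequencies, bit rate and framing are arbitrary choices for illustration.

```python
"""Toy illustration of the idea behind an acoustic covert channel.

This is NOT the published research code; it only shows how data could be
encoded as near-ultrasonic tones (simple frequency-shift keying) and written
to a WAV file that a speaker could play. All parameters are arbitrary.
"""

import math
import struct
import wave

SAMPLE_RATE = 44100    # samples per second, a standard audio hardware rate
BIT_DURATION = 0.05    # seconds of tone per bit (20 bits per second)
FREQ_ZERO = 18_000     # Hz representing a 0 bit, near the edge of most adults' hearing
FREQ_ONE = 19_000      # Hz representing a 1 bit


def tone(frequency: float, duration: float) -> list[int]:
    """Generate 16-bit PCM samples for a sine tone at the given frequency."""
    count = int(SAMPLE_RATE * duration)
    return [
        int(32767 * 0.5 * math.sin(2 * math.pi * frequency * n / SAMPLE_RATE))
        for n in range(count)
    ]


def encode(message: bytes) -> list[int]:
    """Turn each bit of the message into a tone burst at one of two frequencies."""
    samples: list[int] = []
    for byte in message:
        for bit_index in range(8):
            bit = (byte >> (7 - bit_index)) & 1
            samples.extend(tone(FREQ_ONE if bit else FREQ_ZERO, BIT_DURATION))
    return samples


def write_wav(path: str, samples: list[int]) -> None:
    """Write the samples as a mono, 16-bit WAV file."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(struct.pack(f"<{len(samples)}h", *samples))


if __name__ == "__main__":
    write_wav("covert.wav", encode(b"hi"))
    print("Wrote covert.wav encoding 16 bits as near-ultrasonic tones.")
```

A real channel would also need synchronization, error correction and a receiver that demodulates the microphone signal; the point of the sketch is simply that ordinary audio hardware is enough to move data without any network connection at all.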

Preserving the Air Gap

[Photo: security guard concept]

What does this mean for organizations that use air gapping as a security control? First, they can rest assured that air gapping is still a best practice for extremely sensitive systems. There is no evidence that anyone has yet successfully built a two-way communications channel across an air gap in a real-world attack, although that doesn't mean it hasn't happened undetected.

The bottom line is that while an air gap is a strong security control, cybersecurity professionals should never depend upon air gapping, or any security control for that matter, as a panacea. The defense-in-depth approach to computer security requires that defenders build a robust series of overlapping controls that step in when other controls fail.

Administrators of air gapped networks should apply patches, update antivirus software and take all of the security actions they'd take on Internet-connected systems, reducing the likelihood that a virus jumping the air gap will successfully infect them. Similarly, administrators should carefully protect any systems on the public network that sit in close physical proximity to the air gapped network, so that those machines cannot serve as the bridge across the air gap.

About the Author

Mike Chapple is Senior Director for IT Service Delivery at the University of Notre Dame. Mike is CISSP certified and holds bachelor’s and doctoral degrees in computer science and engineering from Notre Dame, with a master’s degree in computer science from the University of Idaho and an MBA from Auburn University.