Last week Mahmoud Ahmadinejad acknowledged that Iran’s uranium enrichment program had suffered a setback: “They were able to disable on a limited basis some of our centrifuges by software installed in electronic equipment,” the Iranian president told reporters. This was something of an understatement. Iran’s uranium enrichment program appears to have been hobbled for the better part of a year, its technical resources drained and its human resources cast into disarray. The “software” in question was a computer worm called Stuxnet, which is already being viewed as the greatest triumph in the short history of cyberwarfare.

Stuxnet first surfaced on June 17 of this year when a digital security company in Minsk, VirusBlokAda, discovered it on a computer belonging to one of its Iranian clients. It quickly became clear that Stuxnet was not an ordinary piece of malware.

Stuxnet is not a virus, but a worm. Viruses piggyback on programs already resident in a computer. Worms are programs in their own right, which hide within a computer and stealthily propagate themselves onto other machines. After nearly a month of study, cybersecurity engineers determined that Stuxnet was designed to tamper with industrial control systems built by the German firm Siemens by subverting their supervisory control and data acquisition (SCADA) software. Which is to say that, unlike most malware, which manipulates merely virtual operations, Stuxnet would have real-world consequences: It wanted to commandeer the workings of a large industrial facility, like a power plant, a dam, or a factory. Exactly what kind of facility was still a mystery.

From the beginning, everything about Stuxnet was anomalous. Worms that tamper with SCADA systems are not unheard of, but they are exceptionally rare. And as a piece of code, Stuxnet was enormous: weighing in at half a megabyte, it dwarfed the average piece of malware many times over. Finally, there was its infection radius. Stuxnet found its way onto roughly 100,000 computers worldwide; 60 percent of them were in Iran.

As a work of engineering, Stuxnet's power and elegance made it even more intriguing. Most industrial systems are run from computers that use Microsoft's Windows operating system. Hackers constantly probe software for what are known as "zero-day" vulnerabilities, weak points in the code never foreseen by the original programmers. In a piece of software as sophisticated and ubiquitous as Windows, discovering even a single zero-day vulnerability is extremely uncommon. The makers of Stuxnet found, and utilized, four of them. No one in cybersecurity had ever seen anything like it.

The worm gained initial access to a system through an ordinary USB drive. Picture what happens when you plug a flash drive into your computer. The machine performs a number of tasks automatically; one of them is pulling up icons to be displayed on your screen, representing the data on the drive. On an infected USB drive, Stuxnet exploited this icon-rendering routine to copy itself onto the computer, no clicking required.

The challenge is that once on the machine, the worm becomes visible to security software, which constantly scans files looking for malware. To disguise itself, Stuxnet installed what's called a "rootkit": a piece of code that intercepts security queries and sends back false "safe" messages, indicating that the worm is innocuous.
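The interception trick can be pictured in a few lines. The sketch below is a toy illustration, not Stuxnet's actual mechanism; the two hidden file names are the temporary files reported in published analyses of the worm, and everything else is invented for the example.

```python
# Toy illustration of a rootkit's core idea: sit between the security
# scanner and the operating system, and filter the worm's files out of
# every answer before the scanner sees it.

HIDDEN = {"~wtr4132.tmp", "~wtr4141.tmp"}  # file names reported in Stuxnet analyses

def os_list_files():
    """Stand-in for the real filesystem listing, worm files included."""
    return ["report.doc", "~wtr4132.tmp", "~wtr4141.tmp", "photo.jpg"]

def hooked_list_files():
    """What the scanner actually sees once the hook is installed."""
    return [f for f in os_list_files() if f not in HIDDEN]

print(hooked_list_files())  # the worm's files never appear
```

The worm's files exist on disk, but any query routed through the hook comes back clean, which is exactly the false "safe" message described above.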

But installing a rootkit requires using drivers, of which Windows machines are well trained to be suspicious. Windows requires that all drivers prove they're on the up-and-up by presenting a valid digital signature. The private keys used to produce those signatures are closely guarded secrets. Yet Stuxnet's malicious drivers carried authentic signatures from two real computer-hardware companies, Realtek Semiconductor and JMicron Technology. Both firms have offices in the same facility, Hsinchu Science Park, in Taiwan. Either by electronic trickery or a brick-and-mortar heist job, the creators of Stuxnet stole these keys, and in a sophisticated enough manner that no one knew they had been compromised.

So to recap: The security keys enable the drivers, which allow the installation of the rootkit, which hides the worm that was delivered by the corrupt USB drive. Stuxnet’s next job was to propagate itself efficiently but quietly. Whenever another USB drive was inserted into an infected computer, it became infected, too. But in order to reduce traceability, Stuxnet allowed each infected USB drive to pass the worm onto only three computers.
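The three-computer limit amounts to a counter the worm keeps with each drive, decremented on every successful hop. Here is a minimal toy model of that logic; the class and attribute names are invented for illustration:

```python
class Computer:
    """A machine that can be infected."""
    def __init__(self):
        self.infected = False

class InfectedDrive:
    """Toy model of the reported three-infection limit: the worm tracks a
    per-drive counter and stops spreading from that drive after three hops."""
    MAX_INFECTIONS = 3

    def __init__(self):
        self.remaining = self.MAX_INFECTIONS

    def plug_into(self, computer):
        """Infect the computer only if this drive's quota isn't exhausted."""
        if self.remaining > 0:
            computer.infected = True
            self.remaining -= 1
            return True
        return False  # fourth plug-in and beyond: the worm stays put

drive = InfectedDrive()
machines = [Computer() for _ in range(5)]
results = [drive.plug_into(m) for m in machines]
print(results)  # only the first three plug-ins succeed
```

Capping each drive at three hops trades speed of spread for stealth: the infection still fans out, but no single drive leaves a long, traceable trail.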

Stuxnet spread in other ways, too. It was not designed to propagate over the Internet at large, but it could move across local networks by exploiting a flaw in the Windows print spooler. In any group of computers that shared a printer, when one became infected, Stuxnet quickly crawled through the spooler connection to contaminate the others. Once it reached a computer with access to the Internet, it began communicating with command-and-control servers located in Denmark and Malaysia. (Whoever was running the operation took these servers offline after Stuxnet was discovered.) While the servers were up, Stuxnet reported information it had gathered about the systems it had invaded and requested updated versions of itself. Several different versions of Stuxnet have been isolated, meaning that the programmers kept refining the worm even after it was released.

Finally, there’s the actual payload. Once resident on a Windows machine, Stuxnet looked for two Siemens SCADA programs, WinCC and PCS 7. If the machine had neither, Stuxnet merely went about the business of spreading itself. But on computers running one of the two, Stuxnet began reprogramming the programmable logic controller (PLC) software and making changes in a piece of code called Organization Block 35. For months, no one knew exactly what Stuxnet was looking for in this block of code or what it intended to do once it found it. Three weeks ago, that changed.
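That check-then-act behavior, spread everywhere but strike only where the target software lives, can be sketched in a few lines. The program names are the real Siemens products mentioned above; the function and its return values are invented for illustration:

```python
# Illustrative sketch of Stuxnet's gating logic: the payload stays dormant
# unless the machine runs the Siemens SCADA software the worm was hunting.
TARGET_SOFTWARE = {"WinCC", "PCS 7"}

def choose_behavior(installed_programs):
    """Decide what the worm does on this machine."""
    if TARGET_SOFTWARE & set(installed_programs):
        return "reprogram PLC"    # found a target: deploy the payload
    return "propagate only"       # otherwise, just keep spreading quietly

print(choose_behavior(["Excel", "WinCC"]))
print(choose_behavior(["Excel", "Chrome"]))
```

This gate is why the worm could sit harmlessly on tens of thousands of ordinary computers while doing damage only at the facilities it was built to reach.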

As cybersecurity engineer Ralph Langner puts it, Stuxnet was one weapon with two warheads. The first payload was aimed at the Siemens S7-417 controller at Iran’s Bushehr nuclear power plant. The second targeted the Siemens S7-315 controller at the Natanz centrifuge operation, where uranium is processed and enriched. At Bushehr, Stuxnet likely attempted to degrade the facility’s steam turbine, with unknown results. But the attack on Natanz seems to have succeeded brilliantly.

Once again, Stuxnet’s design was unexpectedly elegant. With control of the centrifuge system at Natanz, the worm could have triggered a single, catastrophic incident. Instead, Stuxnet took over the centrifuges’ frequency converters during the course of everyday operation and induced tiny bursts of speed in the machinery, followed by abrupt decelerations. These speed changes stressed the centrifuges’ components. Parts wore out quickly; centrifuges broke mysteriously. The uranium being processed was corrupted. And all the while, Stuxnet kept sending normal feedback to the Iranians, telling them that, from the computer’s standpoint, the system was operating like clockwork. This slow burn went on for a year, with the Iranians becoming increasingly exasperated by what looked like sabotage, and smelled like sabotage, but that their computers assured them was perfectly routine.
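The sabotage pattern described above, oscillating the real speed while replaying a steady nominal value to operators, can be modeled with a toy simulation. All numbers here are illustrative stand-ins (published analyses put the centrifuges' nominal frequency near 1,064 Hz, but nothing below reproduces the actual attack profile):

```python
NOMINAL_HZ = 1064  # illustrative nominal rotor frequency

def run_cycle(periods, attack=False):
    """Simulate rotor speed over time; return (actual, reported, wear)."""
    actual, reported, wear = [], [], 0.0
    for t in range(periods):
        if attack and t % 10 == 0:
            speed = NOMINAL_HZ * 1.3      # brief burst of speed
        elif attack and t % 10 == 1:
            speed = NOMINAL_HZ * 0.1      # abrupt deceleration
        else:
            speed = NOMINAL_HZ            # normal operation
        wear += abs(speed - NOMINAL_HZ)   # each deviation stresses the parts
        actual.append(speed)
        reported.append(NOMINAL_HZ)       # spoofed feedback: always nominal
    return actual, reported, wear

actual, reported, wear = run_cycle(50, attack=True)
print(wear > 0 and all(r == NOMINAL_HZatom if False else NOMINAL_HZ == r for r in reported))
```

The point of the model is the mismatch: the `actual` series accumulates wear, while the `reported` series, the only one the operators see, never deviates from nominal.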

In sum, Stuxnet wasted a year’s worth of enrichment efforts at Natanz, ate through centrifuge components and uranium stores, sowed chaos within Iran’s nuclear program, and will likely force Iran to spend another year disinfecting its systems before they can operate at peak levels again. All in all, a successful operation.

Who deserves credit for Stuxnet? There are three possibilities: (1) a lone state actor; (2) a consortium of states; or (3) a private group. Each is plausible at first glance. But the operation was even more complicated than it appears on first inspection.

The planning and implementation of Stuxnet involved three layers of complication. First, there’s the sophistication of the worm itself. Microsoft estimates that coding Stuxnet consumed somewhere in the neighborhood of 10,000 man-days. For a team of 30 to 50 programmers, that’s on the order of a year of full-time effort, at least. Between the workload, the zero-day exploits, and the innovative design of the worm, Stuxnet required not just time but enormous technical sophistication and sizable financial resources.
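The arithmetic behind that estimate is easy to check. Assuming a standard working year of roughly 250 days, a short back-of-the-envelope calculation spans the team sizes mentioned above:

```python
MAN_DAYS = 10_000          # Microsoft's reported estimate
WORKDAYS_PER_YEAR = 250    # assumption: a standard working year

for team in (30, 50):
    years = MAN_DAYS / team / WORKDAYS_PER_YEAR
    print(f"{team} programmers -> {years:.1f} calendar years")
```

Both ends of the range land near a year of calendar time, which is what makes a casual, hobbyist origin for the worm so hard to credit.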

On the next level, the creators of Stuxnet needed competency in the more traditional cloak-and-dagger elements of espionage. The digital verification certificates had to be stolen from the companies in Taiwan, and the infected USB drives had to be planted on or around the community of people who worked in the Iranian nuclear program—modern espionage tradecraft at its best.

The final complication is that vast amounts of expertise in nuclear engineering were required. It’s not enough to design a worm to infiltrate a nuclear plant—Stuxnet’s creators had to know (1) what parts of the systems to target, (2) the intricacies of the systems’ designs, and (3) how to manipulate the systems to achieve the desired effects. This knowledge base might have been the most difficult to obtain. The world is full of enterprising computer jocks; there are only so many people who understand exactly how centrifuges and nuclear reactors work and the minute complexities of Siemens’s S7-315 and S7-417 control systems. It seems unlikely that a private party—a group of rogue hackers or interested civilians—could amass the requisite competencies in all three of these areas.

So who was it—the Israelis, the United States, Germany, Russia? Some combination of the above? We may never know. Given the scope of the operation, it’s amazing that we understand as much as we already do about Stuxnet. Most prior acts of cyberwarfare took place in the shadows; Stuxnet is the first serious cyberweapon to be caught in the wild by civilians. As a result, we’ve witnessed over the last few months an open-source investigation involving experts in different disciplines from around the world. The techies will continue to push and prod Stuxnet, trying to understand how it worked—and how systems can be protected from a similar attack.

Because, in fundamental ways, cyberwar is no different from real war. Innovations can be copied, and there is always the potential for enemies to turn them to their advantage.

Jonathan V. Last is a senior writer at The Weekly Standard.
