Abstract: In this article, the author applies conventional conflict diffusion theory to generate two models of cyber conflict diffusion, which are further refined through case study analysis of the 2008 alleged Russian distributed denial of service attack against Georgia and the 2010 alleged U.S./Israeli Stuxnet virus infection against Iran. The analysis suggests that conventional and cyber conflict diffusion diverge on two points: third-party intervention (escalation) and collateral damage (pathogen). The findings also raise questions regarding state neutrality, non-state actors, and authenticating attackers. Finally, the author reiterates the importance of network security and advocates future research on ICT infrastructure and testing of the models through simulations.
Author's Note: The author would like to thank Peter Bodenbach, Maj Denver Braa, Maj Christopher Dinote, William Fielder CISSP, Dr. Christian Jensen, Carlos Paez, Jr., and Dr. Brandon Valeriano for the valuable comments on earlier drafts of this article.
War diffusion is, to be sure, a relatively rare event, but rare does not mean "unimportant." Many statistically rare events are of considerable interest to scientists, particularly when their consequences are either highly lethal or very costly.[1]
Conventional conflict diffusion, or the spread of a conflict from two warring states to neighboring states, is extremely rare: only 94 instances of conflict diffusion occurred out of 3,746 interstate disputes between 1816 and 1965.[2] Although rare, war diffusion is extraordinarily costly in lives and materiel. The seemingly insignificant number of diffusion cases represents 39 percent of the major international wars during the same timeframe. In turn, cases of cyber conflict diffusion are even rarer: the two cases analyzed in this paper may represent the first of their respective types. The first case is the alleged Russian cyber attack against Georgia in 2008, which coincided with a conventional ground attack, making the engagement the first publicly known instance of a cyber attack in conjunction with armed conflict.[3] In this case, the conflict diffused as non-state actors stepped in to aid Georgia, including a U.S. business that acted without the apparent consent of the U.S. Government.[4] The case also suggests that during a cyber conflict, unregulated actions of third party actors have the potential of unintentionally affecting U.S. cyber security policy, including cyber neutrality.
The second case is the alleged Israeli malware attack against Iran's Natanz nuclear facility in 2010. The Stuxnet virus is the first-known virus specifically designed to target real-world infrastructure, such as power stations. Unlike the Russian attack that was launched over global information and communication technology (ICT) infrastructure, the Stuxnet virus was introduced into a closed network, suggesting a human agent physically injected the virus at the facility. Despite this, the virus spread beyond Natanz in similar fashion to a disease pandemic and infected over 100,000 computers worldwide.[5]
However, conventional conflict diffusion literature does not sufficiently address cyber conflict. To address divergences between conventional and cyber conflict diffusion, I analyzed both cases to generate two models of cyber conflict diffusion; specifically, that cyber conflict between actors A and B will spread to a third actor C through third party intervention (escalation) or collateral damage effects (pathogen). I further propose three lenses of divergence: first, cyber conflict not only involves state-on-state conflict but also embroils non-state actors such as corporations and state-sponsored surrogates. Second, although cyber attacks require extensive planning, cyber conflicts can escalate and propagate rapidly once executed. Third, cyber conflicts can cause damage well outside the immediate conflict area due to global interconnectivity.
Conventional versus Cyber Conflict Diffusion
Conventional conflict diffusion occurs when conflict between two actors (typically states) affects the probability of a similar event occurring in neighboring actors, with World Wars I and II being premier examples.[6] Conflict diffusion is also explained through spatial diffusion, which occurs when new conflict participation initiated by nation A increases (positive spatial diffusion) or decreases (negative spatial diffusion) the likelihood that other nations B will participate in subsequent conflicts: or, the transfer of one state's war behaviors to other states.[7] Siverson and Starr further refine the definition through infection, which occurs when conflicts spread to other states: indeed, observers have compared conflict diffusion to disease epidemics.[8] The likelihood of conventional conflict diffusion increases if the conflicting states are contiguous, if the originating attacker and defender are enduring rivals, and as the duration of conflict increases. Moreover, conventional wars tend to remain contained since initiators target states that will not likely receive third party support. When targets receive third party assistance, the chances of the initiator succeeding fall considerably.[9]
Conventional proximity and interdependence are also useful in framing cyber diffusion. First, proximity in conventional conflict reduces the distance decay associated with projecting combat power.[10] Previous research also finds that nations that share a disputed border or a border with a warring state are nearly five times more likely to become involved in war than countries without neighboring territorial disputes or that border peaceful states.[11] On the one hand, proximity should be irrelevant in cyber conflict since cyber diffusion transcends contiguous borders (albeit closed networks are more difficult to penetrate). On the other hand, conflict is considerably more likely to occur between proximate rivals with territorial disputes.[12] Thus, cyber conflict should likewise be more likely between proximate or regional rivals despite ICT’s decentralized nature. Second, interdependence suggests that actors are locked into forms of collective dependence that structure their interaction.[13] Through interdependence, outcomes can occur that are unintended yet still shared by all the interdependent actors.[14] Physical interdependence also influences cyber conflict; states vary in susceptibility to cyber attack, as damage may depend on the robustness of their ICT infrastructures.
Diffusion through Two Methods of Cyber Attack
The two cases respectively involve distributed denial of service (DDoS) attacks and malicious software infection, or malware. First, DDoS attacks flood particular Internet sites with more requests for data than can be processed, which effectively shuts down the site and prevents access. Sites important to governance or commerce functions are therefore disrupted until the data flooding is mitigated or the attackers disperse.[15] Such attacks are coordinated through botnets, or a network of computers that have been forcibly hijacked through software infection and then coordinated to launch simultaneous DDoS attacks.[16] Botnet networks are also massive in scale: a 2007 cyber attack against Estonia was launched through over a million unsuspecting botnet computers.[17] The relation between DDoS and diffusion is twofold: first, an aggressor can launch a DDoS attack to and from any location, regardless of border contiguity. In fact, launching through neutral locations is one means by which aggressors disguise the attack origin. Second, DDoS attacks create network congestion that radiates outward from targeted systems. Although ICT data pipelines are generally decentralized, DDoS congestion can still dramatically decrease regional data transmission speeds.[18]
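The crowding-out dynamic described above can be illustrated with a toy capacity model. This sketch is my own illustration, not drawn from the cited sources; the server capacity and traffic rates are hypothetical:

```python
# Toy model of DDoS crowding-out: a server processes a fixed number of
# requests per second, so once botnet traffic exceeds capacity,
# legitimate requests are served only in proportion to their share of
# the traffic mix. All rates below are hypothetical illustrations.

def service_rate(legit_rps, attack_rps, capacity_rps):
    """Fraction of legitimate requests served under proportional sharing."""
    total = legit_rps + attack_rps
    if total <= capacity_rps:
        return 1.0  # no congestion: every legitimate request is served
    return capacity_rps / total

# Normal load: 500 legitimate requests/s against 10,000 rps of capacity.
print(service_rate(500, 0, 10_000))  # all legitimate traffic served

# A million-node botnet (cf. the 2007 Estonia attack) sending only
# 10 requests/s per node swamps the same server: roughly 0.1 percent
# of legitimate requests get through, taking the site effectively offline.
print(service_rate(500, 10_000_000, 10_000))
```

The same proportional-loss logic also hints at why DDoS congestion radiates outward: any neighboring service that shares the saturated data pipeline suffers a comparable slowdown.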
The second technique, malware infection, is the most potent form of cyber attack, and the damage inflicted can be widespread and potentially lethal. Examples of malware include logic bombs, viruses and worms. Logic bombs are programs that cause a system or network to shut down and/or erase all data within that system or network.[19] Next, viruses are programs which attach themselves to existing programs in a network and replicate themselves with the intention of corrupting or modifying files. Finally, worms are essentially the same as viruses, except they do not need to attach themselves to existing programs. All of these methods have the potential to inflict serious, even physical, damage: the 2000 ILOVEYOU virus alone caused over $1 billion in system damages.[20] Malware can also spread very rapidly, as highlighted in Figure 1 below. In contrast, conventional conflict often requires months or years to expand.
FIGURE 1: Example of Malware Diffusion--the 2003 Sapphire Worm
Source: Moore, et al., “The Spread of The Sapphire/Slammer Worm,” 2003. Copyright 2003 The Regents of the University of California. All rights reserved. Used with permission.
Notes: The 2003 Sapphire worm infected approximately 74,000 computer systems across the globe in 30 minutes, with 90 percent of the infections occurring within the first 10 minutes. Although the worm did not carry a malicious payload, the worm’s spread overloaded networks and severely hampered services ranging from ATM transactions to airline scheduling. Given the infection speed, the worm could have been far more damaging had it been designed to carry out more functions besides self-replication.[21]
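The infection curve described in the notes can be approximated by a simple random-scanning model, in which each infected host probes random addresses and total growth is logistic. The sketch below is my own, and the scan rate and vulnerable population are rough illustrative values rather than measurements from the 2003 event:

```python
# Logistic model of a random-scanning worm (the mechanism behind
# Sapphire's speed). Parameters are illustrative assumptions, not
# measured values from the 2003 outbreak.

def simulate_spread(vulnerable, scans_per_sec, address_space, seconds):
    """Discrete-time expected-value model: each infected host scans
    random addresses; scans that hit a still-vulnerable host infect it."""
    infected = 1.0
    history = [infected]
    for _ in range(seconds):
        susceptible = vulnerable - infected
        # Expected new infections this second, assuming uniform targeting.
        new = infected * scans_per_sec * susceptible / address_space
        infected = min(vulnerable, infected + new)
        history.append(infected)
    return history

# ~75,000 vulnerable hosts in the 2**32 IPv4 address space; a few
# thousand scans per second per host (plausible for a single-packet
# UDP worm on commodity links).
h = simulate_spread(75_000, 4_000, 2**32, 600)
print(f"after 5 min: {h[300]:,.0f} infected; after 10 min: {h[600]:,.0f}")
```

Under these assumptions early growth is exponential, doubling roughly every ten seconds, which is why nearly all infections occur within the first few minutes, consistent with the pattern shown in Figure 1.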
Despite its severity, malware is only effective if there is a network vulnerability, or security hole, through which malware can enter a system undetected.[22] However, even if network entry points are secure, an individual with physical access can still bridge the "air gap" by deliberately installing malware directly onto a system. The relationship of malware to cyber conflict diffusion is its pathogenic nature. Malware can spread even to systems not connected to open ICT networks, since users can easily transfer it (purposely or accidentally) on removable media such as Universal Serial Bus (USB) drives.
While conventional diffusion theory is useful and relevant for exploring cyber conflict diffusion, cyber conflict diffusion features component mechanisms that diverge from conventional conflict and thus require new model frameworks. In order to build these new models, I analyze both the alleged Russian DDoS attack against Georgia and the alleged U.S./Israeli Stuxnet virus attack against Iran through four narratives: a summary of the attack, attacker authentication, attack damage effects, and the broader outcomes of each. By applying the same format to each case, I can logically clarify similarities and differences between the two. The two models and associated component mechanisms were then generated after examining both cases.
Case 1: The Alleged Russian DDoS Attack against Georgia, 2008, Attack Summary
On August 8, 2008, unknown attackers launched DDoS attacks against Georgian government websites at the same time that Russian forces engaged in conventional combat with Georgian forces. The attack blocked banking, media and government websites, disrupting the flow of information throughout Georgia and to the outside world.[23] The DDoS attacks were accompanied by a cyber blockade that rerouted and blocked all Georgian Internet traffic through Russia.[24] Computers belonging to U.S., Russian, Ukrainian, and Latvian civilians with no connections to the Russian government carried out the attacks, and private citizens were also invited to join in the fight. Russian language websites distributed instructions on how to flood Georgian government websites, and some sites also indicated which target sites were still active and which had collapsed under attacks.[25]
The attacks also drew in third party defenders. The Georgian Ministry of Defense and the President relocated their respective websites to U.S.-based Tulip Systems servers. Soon after, Tulip servers came under DDoS attack, meaning that a private U.S. corporate entity was both attacked by an outside state and also could have been used to undermine United States neutrality through supporting the Georgian government. Google also provided assistance to Georgia’s private business websites, and Computer Emergency Response Teams from Poland and France helped collect Internet log files and analyze Internet Protocol (IP) data from the attacks.[26]
Attacker Authentication
Researchers at Shadowserver (a volunteer group that tracks malicious network activity) assessed that the attack’s command-and-control server was based in the United States and had actually been online several weeks before the assault.[27] Analysis also indicated another attack domain was located in the United Kingdom under ownership of a user with a Russian (.ru) email address and an Irkutsk, Siberia contact telephone number.[28] Russia has never claimed responsibility; yet, Russia had a vested interest in the attack, and the attack’s sophistication and timing suggested state involvement. Although non-state actors could have planned and executed the attack, the resources required make this unlikely unless the actor had significant state sponsorship. Even then, the state could use the supported non-state actor as a front for plausible deniability.
Damage Effects
World Bank statistics indicate that Georgia had seven Internet users per 100 people in 2008, which has since risen to 32 per 100 people. In contrast, Estonia, which experienced a similar attack in 2007, had 57 Internet users per 100 people at the time.[29] The relatively low number of Georgian Internet users in 2008 reflects the nation’s smaller infrastructure capacity and lack of dependence on IT-based infrastructure. Cyber attacks should have less impact on low-density ICT infrastructures than on high-density infrastructures where vital services like transportation, power and banking depend on Internet access.[30] However, Georgia has few cross-border landline connectivity options: in 2008 nearly half of Georgia’s thirteen Internet connections passed through Russia, limiting Georgia's data dispersion options.[31] Georgia’s loss of crucial government websites severed Internet communication in the early days of the conflict, when the government had a vital interest in keeping information flowing to citizens and the international public.[32] Additionally, the National Bank of Georgia ordered all banks to stop offering electronic services for ten days.[33]
Outcomes
The alleged Russian operation against Georgia highlights two additional issues germane to cyber conflict: scalability and temporality.[34] First, scalability refers to the wide variety of effects that a single capability can achieve in cyberspace. In the physical world, capabilities have a limited set of purposes: a tank, a nuclear weapon, and a club all have generally predictable effects. In cyberspace, a single tool can achieve a wide array of effects, making it much harder to predict the scale, let alone how the effects may diffuse. While military analysts can use algorithms to predict physical attacks and effects, it is not easy to identify data flows, determine exactly where they are coming from, or understand the data sender’s intentions.
Next, temporality refers to the instantaneous nature of cyber attacks. The physical world, hampered by friction, gives defenders the benefit of early warning of opponent mobilization efforts. In contrast, uncovering cyber indicators such as botnet arrays, packet sniffers, and network reconnaissance intrusions may indicate some kind of future malice, but not when, how, against whom, the type or severity of the possible impending attack, and for what purposes they will occur. Short- and long-term effects of cyber attacks must also be considered: unlike kinetic force, cyber attacks can be designed to cause only a temporary effect during a particular timeframe.[35] While the attacks did not permanently damage Georgian Internet infrastructure, the temporary damage was most acute at the time when Georgia most needed system access.
Finally, private industry operates the majority of the global Internet system. Thus, even if a state is not a belligerent in a cyber conflict, actions of third party actors have the potential of unintentionally impacting state-level policy, including cyber neutrality, as illustrated by U.S. companies assisting Georgia without knowledge or approval of the U.S. government.[36] Outside retaliation against companies such as Tulip Systems also raises sovereignty questions: how should a state respond to attacks against private actors?
Case 2: The Alleged U.S./Israeli Malware Attack against Iran, 2010, Attack Summary
Global network security firms first identified the W32.Stuxnet worm in June 2010. Stuxnet primarily targeted industrial control systems (ICS) such as those used at Iran's Natanz nuclear facility. Specifically, the worm was designed to reprogram code on programmable logic controllers (PLCs) while at the same time hiding the changes from equipment operators. To increase their odds of success, the worm authors scripted a vast array of software components designed to overcome malware countermeasures.[37] This suggests the authors had detailed knowledge of Siemens’s industrial-production processes and control systems, and access to the Natanz facility’s blueprints. In short, Stuxnet was the work neither of amateur hackers nor cybercriminals, but of a well-financed team.[38] For security reasons, supervisory control and data acquisition (SCADA) systems are generally not connected to the Internet, or only connected briefly for software and firmware updates. Thus, Stuxnet was designed to spread via infected removable media plugged into a computer’s USB ports. It can also copy itself onto other removable devices, spread across local networks via shared folders and print spoolers, and can even be inserted into an Adobe .pdf file and sent over email.[39] The worm is also written in multiple programming languages, meaning it can infect language-specific systems.[40]
Stuxnet is designed to spread aggressively once inside a network. Within several hours of infection, the worm would likely infect systems connected directly or indirectly to compromised computers. The Stuxnet worm is also designed to contact command and control servers over the Internet for new instructions, communicating in plain text to circumvent intrusion monitoring systems searching for program language codes, and then using local peer-to-peer communication to update itself over non-networked systems, again through removable media.[41] Then, if Stuxnet finds the correct PLC model, it starts one of three sequences to inject different code “payloads” into the PLC. The first two are designed to send Iran’s nuclear centrifuges spinning out of control.[42] The third records what normal operations at the nuclear plant look like, then plays those readings back to plant operators so that everything appears to be operating normally while the centrifuges are actually tearing themselves apart.[43]
Attacker Authentication
Some versions of the worm struck their targets within twelve hours of being written, indicating that the coders had infiltrated targeted organizations. To do so, the attackers needed to gather detailed intelligence, as each PLC is uniquely configured. Configuration documents may have been stolen by an insider or even retrieved by an early version of Stuxnet or other malware infection. Once attackers had the design documents and potential knowledge of the facility's computing environment, they would develop a new version of Stuxnet.[44] Attackers would also need to set up an experimental environment in order to test their code. The full cycle may have taken six months and five to ten core developers, as well as numerous support personnel.[45]
But, what potentially links the worm to the U.S. or Israel? Security analysts found the number string 19790509 in a specific Windows registry key. The value may be a random string and represent nothing; but if read in date format, the value may be May 9, 1979. On May 9, 1979, Iranian Jew Habib Elghanian was executed by a firing squad in Tehran--the first Jew and one of the first civilians to be executed by the new Islamic government, prompting the mass exodus of Jews from Iran.[46] Analysts also found the following project path inside the worm’s driver file: b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb. Guavas are plants in the myrtle, or myrtus, family. The string could also have no significant meaning; however, Myrtus could be “MyRTUs.” RTU stands for remote terminal unit, which is a synonym for PLC. In addition, myrtle is Hadassah in Hebrew, and Hadassah is another name for Esther. In the Book of Esther, Esther learned of a Persian plot to assassinate the king. With this foreknowledge, the Jews then led a preemptive strike against the Persians to prevent the assassination.[47] Finally, in June 2012 New York Times reporter David Sanger published an investigative article in which he used anonymous interviews with senior U.S. officials to piece together U.S. and Israeli involvement.[48] While the evidence corroborates reports by Symantec, ISIS and Tofino Security, neither the U.S. nor Israel has made official statements confirming or denying involvement in the attack. Symantec Corporation also cautions against drawing attribution conclusions, given that attackers would have the natural desire to implicate other parties.[49]
Damage Effects
Although the worm did not halt Iran's uranium enrichment program, it did slow the program considerably, potentially destroying over 1,000 of Iran's 8,528 centrifuges.[50] Moreover, Stuxnet infected machines well outside the target area. As of September 2010, data from Microsoft and Symantec identified approximately 100,000 infected systems worldwide, with 60 percent of the infected machines in Iran, 18 percent in Indonesia and 8 percent in India, with additional infections detailed in Figure 2 below.[51] Unfortunately, Stuxnet is now available for global analysis and offers a premier template for modification and attack against other industrial targets.[52]
FIGURE 2: Geographic Distribution of Stuxnet Infections
Source: Falliere, et al., “W32.Stuxnet Dossier,” 6. Copyright 2011 Symantec Corporation. All rights reserved. Used with permission.
Outcomes
Stuxnet is considered one of the most complex and well-engineered worms ever captured--unparalleled by any previous malware, according to Internet security company Kaspersky Lab--and is the first-known malware specifically designed to target real-world infrastructure.[53] From this, numerous cyber experts assessed that Stuxnet was a state-level attack: even if it was planned and executed by private hackers, other cyber experts argue that only states have the resources to hire such professionals and overwhelm other states.[54] In contrast to overt military strikes, there is an appeal to cyber attacks aimed at a centrifuge plant built with illegally obtained foreign equipment and operating in defiance of United Nations Security Council resolutions.[55] Even if U.S. and Israeli hands are clean, the worm's effects worked in their favor at minimal material cost. In regard to diffusion, the rapid global spread of the highly specific Stuxnet virus suggests that even a cyber weapon with extraordinary targeting capability cannot be easily controlled.
Two Models of Cyber Conflict Diffusion: Escalation and Pathogen
First, the alleged Russia-Georgia case escalated from a long-term rivalry to multi-state and non-state actor involvement. Russia and Georgia are considered enduring rivals: the two states have engaged in more than 18 militarized interstate disputes (MIDs) since the 1991 collapse of the Soviet Union, exceeding Diehl and Goertz’s (2000) threshold of six MIDs for an enduring rivalry.[56] Although the conflict was notionally dyadic, the attack against Georgia was launched over open infrastructure both across Russia’s and Georgia’s mutual border and from international locations. At the same time, third parties stepped in to support Georgia, resulting in a cycle of attacks and retaliations from and against third parties outside the target area. I illustrate the escalation process in Figure 3.
Figure 3: Escalation Mechanism
In the Iran case, the attack had to first overcome Natanz's closed-loop infrastructure; however, the worm spread rapidly and resulted in neutral-state infections far outside the target area, a process referred to here as pathogen and illustrated below in Figure 4. Diffusion within this case is the spread of the conflict outside of the target, but against targets that are unwitting and unknowing participants. The Iran case also suggests that proximity to the target is not necessary as long as the attacker can identify exploitable vulnerabilities. However, pathogen also allows infected actors to harvest, analyze, inoculate against, and potentially reengineer malware, which can then be launched in retaliation. While Figure 4 only illustrates direct retaliation by defender B against attacker A, any infected state or non-state actor C with properly trained personnel can theoretically isolate malware packages for its own arsenal.
Figure 4: Pathogen Mechanism
In addition to escalation and pathogen, cyber conflict both conforms to and diverges from conventional diffusion on four points: conventional war, rivalry, proximity, and participation.
- Conventional War: Russia allegedly launched the cyber attack against Georgia concurrently with a conventional attack; indeed, the timing is a major factor indicating Russian involvement. In contrast, the U.S. and Israel allegedly launched the Stuxnet virus with no warning, although verbal escalation over Iran's nuclear program may serve as a proxy. Still, conventional war is likely not a necessary condition for cyber war.
- Rivalry: Russia and Georgia are enduring rivals and thus considerably more likely to engage in conflict due to past history of conflict. Per this definition, the U.S. and Israel are not rivals with Iran. However, the U.S./Israel and Iran have engaged in heated rhetoric since Iran's 1979 revolution, to include Iran's refusal to accept Israel as a state, lack of U.S.-Iran diplomatic relations, and uncertainty over Iran's nuclear program. While definitional differences exist between enduring and verbal rivals, rivalry appears to be a necessary condition for state-sponsored cyber conflict.
- Proximity: conventional literature finds that conflict is significantly more likely between contiguous states with territorial disputes. Thus, conventional conflicts are most likely to diffuse regionally. Both cases suggest that, from a technological perspective, border contiguity is not necessary for cyber conflict, although targeting a closed network requires significantly more planning than launching an attack over an open network.
- Participation: while states arguably remain the only actors powerful enough to launch sophisticated and sustained cyber attacks, the same tools are also available to non-state actors, down to individual users. Thus, while states may initiate a large-scale attack, non-state third party actors can join and escalate the conflict; in turn, states can retaliate against non-state actors. Additionally, cyber conflict can also spread collaterally through the pathogen mechanism.
Conclusions and Way Ahead
Cyber attacks are not new phenomena: previous attacks have resulted in billions of dollars in theft and damage. Millions of malware programs have been written, with thousands of new programs written every day. However, there has yet to be a catastrophic failure that plunges states into chaos. Nonetheless, the cases may be harbingers of future events. For the first time, a state may have simultaneously launched a cyber attack against an opponent's network as tanks crossed the opponent's border. For the first time, a state may have deliberately launched a cyber attack against a specific weapons system inside another state. Moreover, cyber attacks can draw third-party actors into the fray, state and non-state actor alike, either through overt participation or unintended infection. The cases also raise additional questions regarding collateral damage, actor involvement and attacker authentication.
Collateral Ramifications
The conventional diffusion literature contends that diffused wars, while rare, are highly destructive. To that end, what are the collateral ramifications of cyber conflict diffusion? Indeed, cyber attack effects inside one state can ripple across regions, let alone across the globe. Cyber attacks can also result in severe physical damage. For example, while the Stuxnet infection damaged centrifuges, could it also have resulted in uranium leaks? This question is not without precedent: in 2003 the "Slammer" computer worm infected thousands of ICT systems globally and shut down an Ohio nuclear power plant’s safety monitoring systems for five hours.[57] While cyber damage theoretically ranges from nuisance to lethality, even nuisance may inspire retaliation. Indeed, in 2010 former CIA director Michael Hayden recommended that attacks against finance and power systems should be the minimum threshold for acts of war, as well as any DDoS attacks regardless of target; i.e., a DDoS attack against a small, private corporate website would be treated the same as a DDoS attack against the Pentagon.[58]
State and Non-State Actors
Attack anonymity and network security asymmetries imply that smaller, less powerful actors--both state and non-state--have more opportunity in cyberspace than in the physical world. But, sustained cyber power depends on controlling resources, such as controlling infrastructure, building networks, coding software, and deploying human talent. Moreover, while cyber escalation and pathogen may occur rapidly vis-à-vis conventional diffusion, both cases suggest that significant planning was necessary prior to attack execution. For example, Sanger suggests that the U.S. and Israel initiated Stuxnet virus planning in 2006 under the title Olympic Games.[59]
Yet, while sophistication of each case suggests state-level involvement, non-state entities such as corporations, criminal organizations, and sophisticated hacker collectives (i.e. Anonymous and LulzSec) have the wherewithal to initiate or augment major cyber attacks. What remains debatable is whether or not non-state actors can sustain operations for extended periods. States could provide non-state actors the means to initiate and sustain attacks as surrogates. But, if states provide funding and know-how in the cyber realm, can state-sponsored cyber attacks still be broadly considered as state-on-state attacks, albeit launched over power lines? At the same time, release of malware “into the wild” raises the concern that third parties will catch the bullet and return fire to the attacker, with retaliation not necessarily originating from states.
Authentication
Despite evidence that both cases represent state-sponsored cyber conflict, no smoking guns conclusively prove Russia launched an attack on Georgia, or that the U.S. or Israel launched an attack on Iran. Indeed, no cyber attack has ever been publicly traced to a government. For example, in 2004 former Secretary of the Air Force Thomas C. Reed published At the Abyss: An Insider’s History of the Cold War, in which he claimed U.S. operatives deliberately inserted malware into oil pipeline equipment destined for the Soviet Union. The malware dramatically altered oil pipeline settings, leading to one of the largest non-nuclear explosions ever recorded.[60] Still, the veracity of this event has been called into question, with no official support from U.S. agencies and former Soviet agents calling Reed’s report “rubbish.”[61] More recently, in June 2012 Deputy White House Press Secretary Josh Earnest said he was "not able to comment on any of the specifics or details” of David Sanger’s NY Times report on the virus.[62] While the two cases exhibit sophistication likely beyond the reach of a smart but underfunded hacking group, we may never know conclusively who launched the above attacks, let alone who will launch future attacks.
Recommendations and Future Research
The rapidity with which cyber attacks diffuse, along with their potential damage, requires actionable rather than purely theoretical recommendations. The outcomes of both cases suggest that the best defense against cyber conflict diffusion effects is to enact, execute, and periodically review network security measures. First, cyber attacks can only succeed and spread if the target and collateral systems are vulnerable. Thus, the simplest method of defense is proactive network security, requiring a government-business partnership to share malware infection data and remediation solutions. Plugging security holes may be costly if enterprises have no comprehensive security and contingency plan in place. Additionally, network security requires that users remain diligent in following sound security practices (consider the hypothetical Iranian nuclear plant worker who may have infected the world via a single USB drive). The consequences of negligence are great: malware requires only a single exploitable vulnerability to infect a system, and the cases above offer sufficient examples of negative outcomes. To paraphrase a common quip in the cyber security field: a network defender must succeed 100 percent of the time; the aggressor needs to succeed only once.
As for research programs, I did not fully explore ICT infrastructure in this paper. While the proposed escalation and pathogen models posit that border contiguity is not a necessary condition for cyber conflict diffusion, the cases also suggest that ICT infrastructure density may play a significant role in how cyber attacks spread and how much damage they inflict. Thus, the study of ICT density, particularly between enduring rivals, may prove fruitful for forecasting future cyber conflicts and their potential collateral effects. The same work can also illuminate how non-state actors respond to cyber conflict, whether through participation or collateral infection. Next, given the rarity of major cyber conflict, I recommend testing the models through simulations built on the theoretical framework of positive cyber-spatial diffusion and its component mechanisms. Such simulations may synergize well with ICT density studies. Ultimately, while I hope these models prove useful for understanding and studying cyber conflict, the proposed escalation and pathogen models and their component mechanisms should not be construed as the final theoretical word. Given the extraordinary risks and outcomes associated with cyber conflict, I hope and expect that future research will uncover additional insights and actionable information.
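To illustrate the kind of simulation the research program above envisions, the following is a minimal sketch of a pathogen-style diffusion run over a contact network. It is not an implementation of the paper's models: the topology, node labels, infection probability, and number of steps are all illustrative assumptions, chosen only to show how ICT connectivity density shapes spread paths.

```python
import random

def simulate_diffusion(adjacency, patient_zero, p_infect, steps, rng):
    """Toy pathogen-model run: starting from an initial target system,
    malware spreads to each uninfected neighbor with probability
    p_infect per contact per time step. All parameters are
    hypothetical, for illustration only."""
    infected = {patient_zero}
    for _ in range(steps):
        newly_infected = set()
        for node in infected:
            for neighbor in adjacency.get(node, []):
                if neighbor not in infected and rng.random() < p_infect:
                    newly_infected.add(neighbor)
        infected |= newly_infected
    return infected

# Hypothetical ICT topology: edges stand in for network interconnection,
# not geographic contiguity. Denser adjacency means more spread paths.
network = {
    "target":  ["ally1", "ally2", "vendor"],
    "ally1":   ["target", "vendor"],
    "ally2":   ["target", "neutral"],
    "vendor":  ["target", "ally1", "neutral"],
    "neutral": ["ally2", "vendor"],
}

rng = random.Random(42)  # seeded for reproducible runs
result = simulate_diffusion(network, "target", p_infect=0.5, steps=4, rng=rng)
```

Sweeping `p_infect` or varying the density of `network` across repeated seeded runs would give a crude first look at the ICT-density question raised above: whether more interconnected systems suffer wider collateral infection.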
[1] Randolph M. Siverson and Harvey Starr. "Opportunity, Willingness, and the Diffusion of War." The American Political Science Review 84, no. 1 (1990): 54.
[2] Ibid.
[3] William C. Ashmore, USA. "Impact of Alleged Russian Cyber Attacks." United States Army Command and General Staff College, 2009, 12.
[4] Will Goodman. "Cyber Deterrence Tougher in Theory than in Practice?" Strategic Studies Quarterly (Fall 2010): 115.
[5] James P. Farwell and Rafal Rohozinski. "Stuxnet and the Future of Cyber War." Survival 53, no. 1 (2011): 23.
[6] Benjamin A. Most and Harvey Starr. "Theoretical and Logical Issues in the Study of International Diffusion." Journal of Theoretical Politics 2, no. 4 (1990): 398; Michael P. Colaresi, Karen Rasler, and William R. Thompson. Strategic Rivalries in World Politics: Position, Space, and Conflict Escalation. Cambridge: Cambridge University Press, 2008; Randolph M. Siverson and Harvey Starr. The Diffusion of War: A Study of Opportunity and Willingness. Ann Arbor: The University of Michigan Press, 1991, 53.
[7] Benjamin A. Most and Harvey Starr. "Diffusion, Reinforcement, Geopolitics, and the Spread of War." The American Political Science Review 74, no. 4 (1980): 933.
[8] Siverson and Starr, “Opportunity, Willingness, and the Diffusion of War,” 8; Most and Starr, “Diffusion, Reinforcement, Geopolitics, and the Spread of War,” 54.
[9] Scott Sigmund Gartner and Randolph M. Siverson. "War Expansion and War Outcome." The Journal of Conflict Resolution 40 (1996): 14; Brandon Valeriano and John A. Vasquez. “Identifying and Classifying Complex Interstate Wars.” International Studies Quarterly 54, no. 2 (2010): 561–582.
[10] Bruce Bueno de Mesquita. The War Trap. New Haven: Yale University Press, 1981, 20; John A. Vasquez. "Why Do Neighbors Fight? Proximity, Interaction, or Territoriality." Journal of Peace Research 32, no. 3 (1995): 280; Kristian Gleditsch. All International Politics Is Local. Ann Arbor: The University of Michigan Press, 2002, 6.
[11] Most and Starr, “Diffusion, Reinforcement, Geopolitics, and the Spread of War,” 942; Douglas M. Gibler. “Bordering on Peace: Democracy, Territorial Issues, and Conflict." International Studies Quarterly 51, no. 3 (2007): 529; Paul D. Senese. “Territory, Contiguity, and International Conflict: Assessing a New Joint Explanation.” American Journal of Political Science 49, no. 4 (2005).
[12] Brandon Valeriano and Ryan Maness. "Persistent Enemies and Cyberwar: Rivalry Relations in an Age of Information Warfare." In Cybersecurity to Cyberwar Workshop. U.S. Naval War College, 2010.
[13] David A. Lake and Patrick Morgan, ed. Regional Orders: Building Security in a New World. State College, P.A.: Pennsylvania State University Press, 1997; Gleditsch, All International Politics Is Local, 4
[14] Benjamin A. Most and Harvey Starr. Inquiry, Logic, and International Politics. Columbia, S.C.: University of South Carolina Press, 1989, 30; Gleditsch, All International Politics Is Local, 5; Senese, “Territory, Contiguity, and International Conflict,” 778
[15] Valeriano and Maness, “Persistent Enemies and Cyberwar: Rivalry Relations in an Age of Information Warfare,” 4
[16] Richard A. Clarke and Robert K. Knake. Cyber War: The Next Threat to National Security and What to Do about It. New York, N.Y.: Harper Collins, Inc., 2010, 282.
[17] Bruce D. Caulkins. "Proactive Self Defense in Cyberspace." Carlisle Barracks, P.A.: U.S. Army War College, 2009, 14.
[18] Qijun Gu, Peng Liu, and Chao-Hsien Chu. "Analysis of Area-Congestion-Based DDoS Attacks in Ad Hoc Networks." Ad Hoc Networks 5, no. 5 (2007): 613-25.
[19] Clarke and Knake, Cyber War: The Next Threat to National Security and What to Do about It, 287
[20] Ronald J. Deibert and Janice Gross Stein. "Hacking Networks of Terror." Dialogue-IO 1, no. 1 (2002): 11; Valeriano and Maness, "Persistent Enemies and Cyberwar: Rivalry Relations in an Age of Information Warfare,” 4
[21] Moore, et al., “The Spread of The Sapphire/Slammer Worm,” 2003.
[22] Goodman, “Cyber Deterrence Tougher in Theory Than in Practice,” 117
[23] Ashmore, “Impact of Alleged Russian Cyber Attacks,” 12
[24] Beidleman, “Defining and Deterring Cyber War,” 5
[25] "Marching Off to Cyberwar." The Economist, 4 December 2008; Bradley L. Boyd. "Cyber Warfare: Armageddon in a Teacup?" Fort Leavenworth, K.S.: U.S. Army Command and General Staff College, 2009, 54; Goodman, “Cyber Deterrence Tougher in Theory Than in Practice,” 16
[26] Eneken Tikk, Kadri Kaska, Kristel Rünnimeri, Mari Kert, Anna-Maria Talihärm, and Liis Vihul. "Cyber Attacks against Georgia: Legal Lessons Identified, Ver. 1.0." Tallinn, Estonia: Cooperative Cyber Defence Centre of Excellence, 2008, 15; Goodman, “Cyber Deterrence Tougher in Theory Than in Practice,” 115
[27] John Markoff. "Before the Gunfire, Cyberattacks." New York Times, 12 August 2008.
[28] Maria José Rios, Sérgio Tenreiro de Magalhães, Leonel Santos, and Hamid Jahankhani. "The Georgia’s Cyberwar. Global Security, Safety, and Sustainability." Paper presented at the 5th International Conference, ICGS3, 2009.
[29] Information and Communications for Development (I4CD). "World Bank Data Query." edited by World Bank Group. Washington, D.C.: World Bank Group, 2011.
[30] Markoff, "Before the Gunfire, Cyberattacks."
[31] Tikk, et al., “Cyber Attacks against Georgia,” 6
[32] Ibid., 15.
[33] Beidleman, “Defining and Deterring Cyber War,” 8
[34] Goodman, “Cyber Deterrence Tougher in Theory Than in Practice,” 116
[35] Tikk, et al., “Cyber Attacks against Georgia,” 16
[36] Stephen W. Korns and Joshua E. Kastenberg. "Georgia’s Cyber Left Hook." Parameters, 2009, 61.
[37] Nicholas Falliere, Liam O Murchu, and Eric Chien. "W32.Stuxnet Dossier Version 1.4." Mountain View, C.A.: Symantec Corporation, 2011, 1.
[38] "A Worm in the Centrifuge." The Economist, 30 September 2010; Farwell and Rohozinski, “Stuxnet and the Future of Cyber War,” 26
[39] Eric Byres, Andrew Ginter, and Joel Langill. How Stuxnet Spreads--A Study of Infection Paths in Best Practice Systems. Lantzville, B.C.: Tofino Security, 2011, 7; Economist, “A Worm in the Centrifuge”
[40] Thomas M. Chen. "Stuxnet, the Real Start of Cyber Warfare?" IEEE Network (November/December 2010): 3.
[41] Byres, et al., “How Stuxnet Spreads,” 7
[42] David Albright, Paul Brennan, and Christina Walrond. "Did Stuxnet Take Out 1,000 Centrifuges at the Natanz Enrichment Plant?" ISIS Report, 22 December 2010. Institute for Science and International Security, 4.
[43] Albright, et al., “Did Stuxnet Take Out 1,000 Centrifuges at the Natanz Enrichment Plant,” 4; William J. Broad, John Markoff, and David E. Sanger. "Israeli Test on Worm Called Crucial in Nuclear Delay." New York Times, 15 January 2011; Byres, et al., “How Stuxnet Spreads,” 8
[44] Falliere, et al., “W32.Stuxnet Dossier,” 18
[45] Ibid., 24.
[46] Ibid., 3.
[47] Ibid., 3.
[48] David E. Sanger. “Obama Order Sped Up Wave of Cyberattacks Against Iran.” New York Times, 1 June 2012, A1.
[49] Falliere, et al., “W32.Stuxnet Dossier,” 18
[50] Director General Report. "Implementation of the NPT Safeguards Agreement and Relevant Provisions of Security Council Resolutions 1737 (2006), 1747 (2007), 1803 (2008) and 1835 (2008) in the Islamic Republic of Iran." International Atomic Energy Agency, 2010, 1; Albright, et al., “Did Stuxnet Take Out 1,000 Centrifuges at the Natanz Enrichment Plant,” 9
[51] Economist, “A Worm in the Centrifuge”; Falliere, et al., “W32.Stuxnet Dossier,” 5
[52] Broad, et al., “Israeli Test on Worm Called Crucial in Nuclear Delay”; Chen, “Stuxnet, the Real Start of Cyber Warfare,” 3; Farwell and Rohozinski, “Stuxnet and the Future of Cyber War,” 34; John P. Mello. “Stuxnet is Dead, Long Live Stuxnet.” TechNewsWorld, 09 July 2012.
[53] Byres, et al., “How Stuxnet Spreads,” 6; Chen, “Stuxnet, the Real Start of Cyber Warfare,” 2; Jonathan Fildes. "Stuxnet Virus Targets and Spread Revealed." BBC News, 15 February 2011.
[54] Neild, “Does Stuxnet Herald the Age of Cyber Warfare”; Tim Weber. "Davos 2010: Cyber Threats Escalate With State Attacks." BBC News, 10 January 2010.
[55] Albright, et al., “Did Stuxnet Take Out 1,000 Centrifuges at the Natanz Enrichment Plant,” 7
[56] Paul F. Diehl and Gary Goertz. War and Peace in International Rivalry. Ann Arbor: University of Michigan Press, 2000, 44; Correlates of War (COW) Project. 2010. “COW dataset, v4.0 1816-2007.” Available at http://correlatesofwar.org (accessed 30 July 2012).
[57] Scott W. Beidleman. "Defining and Deterring Cyber War." Carlisle Barracks, PA: U.S. Army War College, 2009, 8; David Moore, Vern Paxson, Stefan Savage, Colleen Shannon, Stuart Staniford, and Nicholas Weaver. “The Spread of the Sapphire/Slammer Worm.” The Cooperative Association for Internet Data Analysis (CAIDA), 2003. Available at http://www.caida.org/publications/papers/2003/sapphire/sapphire.html (accessed 30 July 2012).
[58] Declan McCullagh. "U.S. Military Cyberwar: What's Off-Limits?" CNET News, 29 July 2010.
[59] Sanger, “Obama Order Sped Up Wave of Cyberattacks Against Iran,” para. 16.
[60] Matthew French. “Tech Sabotage During the Cold War.” Federal Computer Week, 26 April 2004; Barry Neild. "Does Stuxnet Herald the Age of Cyber Warfare?" Global Post, 18 October 2010.
[61] Anatoly Medetsky. “KGB Veteran Denies CIA Caused '82 Blast.” The Moscow Times, 18 March 2004.
[62] Chloe Albanesius. “Stuxnet Worm Crafted by U.S., Israel to Thwart Iran's Nuclear Program.” PC Mag.com, 1 June 2012, para. 13. Available at http://www.pcmag.com/article2/0,2817,2405191,00.asp (accessed 4 February 2013).