Strategic Defense Initiative [SDI] I

President Ronald Reagan’s announcement of the Strategic Defense Initiative on 23 March 1983 marked an even more explicit bid to use US technology to compete with the Soviet Union. As he put it:

Let us turn to the very strengths in technology that spawned our great industrial base and that have given us the quality of life we enjoy today.

What if free people could live secure in the knowledge that their security did not rest upon the threat of instant US retaliation to deter a Soviet attack, that we could intercept and destroy strategic ballistic missiles before they reached our own soil or that of our allies?

I call upon the scientific community in our country, those who gave us nuclear weapons, to turn their great talents now to the cause of mankind and world peace, to give us the means of rendering these nuclear weapons impotent and obsolete.

The US National Intelligence Council assessed that the Soviet Union would encounter difficulties in developing and deploying countermeasures to SDI. As one September 1983 memorandum put it,

They are likely to encounter technical and manufacturing problems in developing and deploying more advanced systems. If they attempted to deploy new advanced systems not presently planned, while continuing their overall planned force modernization, significant additional levels of spending would be required. This would place substantial additional pressures on the Soviet economy and confront the leadership with difficult policy choices.

SDI was announced in March 1983 by President Ronald Reagan as a plan for a system to defend against nuclear weapons delivered by intercontinental ballistic missiles (ICBMs). As planned, SDI would constitute an array of space-based vehicles that would destroy incoming missiles in the suborbital phase of attack.

The plan was controversial on three broad fronts. First, the Soviet Union, at the time the world’s other great nuclear superpower, saw SDI as a violation of the 1972 Treaty on the Limitation of Anti-Ballistic Missile Systems (the ABM Treaty, concluded as part of the SALT I negotiations) and therefore an upset to the balance of power. Second, proponents of the policy of mutually assured destruction (“MAD”), who saw the policy as the chief deterrent to nuclear war, criticized SDI as a means of making nuclear war appear a viable strategic alternative. Third, a great many scientists and others believed SDI was far too complex and expensive to work. These critics dubbed the “futuristic” program “Star Wars,” after the popular science fiction movie, and the label was widely adopted by the media.

Indeed, the technical problems involved in SDI were daunting. Multiple incoming missiles, which could be equipped with a variety of decoy devices, had to be detected and intercepted in space. Even those friendly to the project likened this to “shooting a bullet with a bullet.” Congress, unpersuaded, refused to grant funding for the full SDI program, although modified and spin-off programs consumed billions of dollars in development.

The ‘Star Wars’ programme or Strategic Defense Initiative (SDI), outlined by Reagan in a speech on 23 March 1983, was designed to enable the USA to dominate space, using space-mounted weapons to destroy Soviet satellites and missiles. It was not clear that the technology would work, in part because of the possible Soviet use of devices and techniques to confuse interceptor missiles. Indeed, Gorbachev was to support the Soviet army in claiming that SDI could be countered. However, the programme was also a product of the financial, technological and economic capabilities of the USA, and thus highlighted the contrast in each respect with the Soviet Union. The Soviets were not capable of matching the American effort, in part because they proved far less successful in developing electronics and computing and in applying them in challenging environments. Effective in heavy industry, although the many tanks it produced had rather crude driving mechanisms by Western standards, the Soviet Union failed to match American advances in electronics. Moreover, as weaponry shifted from traditional engineering to electronics, and as control systems came to depend on the latter, technology, industrial capacity and military capability became ever more closely linked. It was in the 1980s that the Soviet Union fell behind notably. In June 1984 the Homing Overlay Experiment, in which an American interceptor launched from Kwajalein Atoll destroyed a mock missile warhead by direct impact, underlined the point and strengthened the American position in subsequent negotiations.

The collapse of the Soviet Union beginning in 1989 seemed to many to render SDI a moot point, although others pointed out that a Russian arsenal still existed and that other nations had or were developing missiles of intercontinental range. There were, during the early 1990s, accusations and admissions that favorable results of some SDI tests had been faked, and former secretary of defense Caspar Weinberger asserted that while the SDI program had failed to produce practical weapons and had cost a fortune, its very existence forced the Soviet Union to spend itself into bankruptcy. In this sense, SDI might be seen as the most effective weapon of the cold war. In the administration of George W. Bush, beginning in 2001, SDI was revived, and the USAF resumed development and testing of components of the system.

Strategic Defense Initiative Organization (SDIO)

SDI’s formal beginnings date from NSDD 119, which President Reagan signed on January 6, 1984 and which placed the program under DOD’s leadership. Key elements of this document, reflecting SDI’s raison d’être, included DOD management of the program, with the SDI program manager reporting directly to the secretary of Defense; primary emphasis on technologies involving nonnuclear components; and continued research on nuclear-based strategic defense concepts as a hedge against a Soviet ABM breakout (Feycock 2006, 216).

On March 27, 1984, Secretary of Defense Caspar Weinberger (1917-2006) appointed Air Force lieutenant general James Abrahamson (1933-) as the first director of the Strategic Defense Initiative Organization (SDIO), which was given responsibility for developing SDI. Weinberger signed the SDIO charter on April 24, 1984, giving Abrahamson extensive freedom in managing the program (Federation of American Scientists n.d., 5).

A May 7, 1984 memorandum from Deputy Secretary of Defense William H. Taft IV (1945-) to the secretary of the Air Force provided additional direction and guidance on the mission and program management of SDI’s boost and space surveillance tracking systems. SDI attributes mandated in this document included the ability to provide ballistic missile tactical warning/attack assessment (TW/AA); satellite attack warning/verification (SAW/V); satellite targeting for U.S. ASAT operations; and SDI surveillance, acquisition, tracking and kill assessment (SATKA). Additional program mandates included program plans showing specific requirements, critical milestones, and costs along with alternative means of achieving these objectives (Spires 2004, 2:1130-1131).

SDIO was organized into five program areas covering SATKA, Directed Energy Weapons (DEW) Technology, Kinetic Energy Weapons (KEW) Technology, Systems Concept/Battle Management (SC/BM), and Survivability, Lethality, and Key Technologies (SLKT). SATKA program objectives included investigating sensing technologies capable of providing information to activate defense systems, conduct battle management, and assess force status before and during military engagements. A key SATKA challenge was developing the ability to discriminate among hostile warheads, decoys, and chaff during midcourse and early terminal phases of their trajectories (DiMaggio et al. 1986, 6-7).

The DEW program sought to examine the potential for using laser and/or particle beams for ballistic missile defense. DEW can deliver destructive energy to targets at or near the speed of light and are particularly attractive for use against missiles as they rise through the atmosphere. Successfully engaging missiles during these flight stages can allow missiles to be destroyed before they release multiple independently targeted warheads. Relevant weapon concepts studied under DEW included space-based lasers, ground-based lasers using orbiting relay mirrors, space-based neutral particle beams, and endoatmospheric charged particle beams guided by low-power lasers (DiMaggio et al. 1986, 7-8).

KEW program applications involved studying ways of accurately directing fairly light objects at high speed to intercept missiles or warheads during any flight phase. Technologies investigated by this program included space-based chemically launched projectiles with homing devices and space-based electromagnetic rail guns (DiMaggio et al. 1986, 8).

Research pertinent to SC/BM programs explored defensive architecture options allowing for deployment of extremely responsive, reliable, survivable, and cost-effective battle management and command, control, and communications systems. Factors examined in such programs included mission objectives, offensive threat analyses, technical capabilities, risk, and cost (DiMaggio et al. 1986, 8-9).

SLKT program components sought to support research and technology development for improving system effectiveness and satisfying system logistical needs. Such survivability and lethality studies sought to produce information about expected enemy threats and the ability of SDI systems to survive efforts to destroy or defeat them. Relevant SLKT supporting technology research areas included space transportation and power, orbital maintenance, and energy storage and conversion. Pertinent SDI logistical research, under program auspices, was crucial for evaluating and reducing deployment and operational costs (DiMaggio et al. 1986, 10).

SDI achieved significant program and technical accomplishments over the next decade. A June 1984 Homing Overlay Experiment achieved the first kinetic kill intercept of an ICBM reentry vehicle; SDIO established an Exoatmospheric Reentry Vehicle Interceptor Subsystem (ERIS) Project Office in July 1984 and a High Endoatmospheric Defense Interceptor (HEDI) Project Office in October 1984. March 1985 saw Weinberger invite allied participation in U.S. ballistic missile defense programs, and in October 1985 National Security Advisor Robert McFarlane (1937-) introduced a controversial “broad interpretation” of the ABM Treaty, which asserted that certain space-based and mobile ABM systems and components such as lasers and particle beams could be developed and tested but not deployed (U.S. Army Space and Missile Defense Command n.d., 2-3; U.S. Congress, Senate Committee on Armed Services, Subcommittee on Theater and Strategic Nuclear Forces 1986, 136-144).

During August 1986 the Army’s vice chief of staff approved the U.S. Army Strategic Defense Command theater missile defense research program, and the following month this official also directed the establishment of a Joint Theater Missile Defense Program Office in Huntsville, Alabama, to coordinate Army theater missile defense requirements. May 1987 saw the successful kinetic energy intercept by the Flexible Lightweight Agile Guided Experiment of a Lance missile, a high-velocity, low-altitude target. In July 1988 Hughes Aircraft delivered to the military the Airborne Surveillance Testbed Sensor, the most complex long-wavelength infrared sensor built to that time.

February 1989 saw President George H. W. Bush (1924-2018) announce that his administration would continue SDI development; a June 1989 national defense strategy review concluded that SDI program goals were sound; SDIO approved an Endoatmospheric/Exoatmospheric Interceptor program during summer 1990 to succeed HEDI; the first successful ERIS intercept took place during January 1991; and in June 1991 there were successful strap-down and free-flight hover tests of the lightweight exoatmospheric projectile integrated vehicle (U.S. Army Space and Missile Defense Command n.d., 3-4; U.S. Department of Defense 1989, 1-31).

SDI was able to achieve significant accomplishments during the 1980s and early 1990s, as the developments described above demonstrate. The program remained controversial during its first decade before SDIO was renamed the Ballistic Missile Defense Organization (BMDO) by the Clinton administration on June 14, 1994 (U.S. Department of Defense 1994, 1).

Program expenditures remained a source of controversy for some congressional appropriators. SDIO’s budget, according to a 1989 DOD report, was $3.8 billion for fiscal year 1989, roughly 1.3 percent of the $282.4 billion defense budget for that year (U.S. Department of Defense 1989, 27). A 1992 congressional review of SDIO expenditures calculated that the organization had received $25 billion since 1984 for ballistic missile defense system research and development and that the Bush administration’s proposed fiscal year 1992 budget estimated system acquisition costs at $46 billion (U.S. General Accounting Office 1992(a), 10).

Strategic Defense Initiative [SDI] II

Changing SDI program objectives complicated SDIO’s work and operational efficiency. SDI was originally intended to provide a massive system for defending the United States against Soviet ballistic missile attacks. During 1987 program objectives shifted from defending against massive missile strikes to deterring such strikes. The 1990 introduction of the Brilliant Pebbles space-based interceptor (see next entry) caused SDIO to change organizational direction again. A number of organizational realignments were implemented during September 1988, such as adding a chief of staff to oversee SDIO activities; adding a chief engineer to ensure multiple engineering tasks and analyses received top-level attention; and creating a Resource Management Directorate, which merged the Comptroller and Support Services Directorates in an effort to enhance management efficiency (U.S. General Accounting Office 1992(a), 2; Federation of American Scientists n.d., 8).

Operation Desert Storm also heralded important changes in SDIO program activities. The use of Patriot missile batteries against Iraqi Scud missiles during this 1991 conflict achieved some success, but with significant attendant controversy over how well the Patriot system had actually performed (U.S. Congress, House Committee on Government Operations, Subcommittee on Legislation and National Security 1993; Snodgrass 1993).

During his January 29, 1991 State of the Union address to Congress as this conflict raged, President Bush announced another SDI shift, to the concept called Global Protection Against Limited Strikes (GPALS), as reflected in the following statement: “. . . I have directed that the Strategic Defense Initiative program be refocused on providing protection from limited ballistic missile strikes, whatever their source. Let us pursue an SDI program that can deal with any future threat to the United States, to our forces overseas and to our friends and allies” (Public Papers of the Presidents of the United States 1991, 78).

This shift to GPALS came about as a result of a perceived decline in the Soviet missile threat and the emergence of tactical ballistic missile threats from Iraq and other third world countries. GPALS would have two ground-based segments and one space-based segment. One of the ground-based components would consist of sensors and interceptors to protect U.S. and allied missile forces overseas from missile attack. The other ground-based segment would protect the United States from accidental or limited attacks of up to 200 warheads. The GPALS space-based component would help detect and intercept missiles and warheads launched from anywhere in the world. SDIO sought to integrate these three segments to provide mutual coordinated support and required that each of these entities be designed to work together using automated data processing and communication networks (U.S. General Accounting Office 1992(a), 2-3).

This governmental emphasis on localized theater missile defense, as opposed to global strategic missile defense, was also reflected in the fiscal year 1991 congressional conference committee report on the defense budget issued October 24, 1990. This legislation called for the secretary of Defense to establish a centrally managed theater missile defense program funded at $218,249,000, required DOD to accelerate research and development on theater and tactical ballistic missile defense systems, and called for the inclusion of Air Force and Navy requirements in such a plan and the participation of these services (U.S. Congress, House Committee on Appropriations 1990, 117-118).

SDI and the concept of ballistic missile defense continued generating controversy throughout the program’s first decade. Although SDIO achieved relatively stable funding and enough operational successes to retain sufficient political support within DOD and in Congress to persevere as an organization, its mission focus never remained constant. Contentiousness over whether there was even a need for SDI or ballistic missile defense was reflected in the following 1991 statements before House Government Operations Committee oversight hearings on SDI.

Opponents of SDI such as the Federation of American Scientists’ Space Policy Project director John Pike claimed that ballistic missile threats to the United States were hyperbolic rhetoric, that SDI was too expensive and had numerous technical problems, and that its deployment could jeopardize international arms control. Pike described SDI as a “Chicken Little” approach to existing threats, one that would cost more than $100 billion instead of the then-current projection of $40 billion. He also contended that SDI had significant computing and software problems, that its deployment would end the ABM Treaty and imperil arms control progress, and that there was no compelling reason to deploy SDI based on the existing strategic environment (U.S. Congress, House Committee on Government Operations, Subcommittee on Legislation and National Security 1992, 194).

Proponents of SDI such as Keith Payne of the National Institute for Public Policy emphasized how the Iraqi use of Scud missiles during Operation Desert Storm had drastically changed Cold War strategic assumptions about how ballistic missiles might be used in future military conflicts. These proponents stressed the threat to civilians from missiles that could carry chemical warheads and how normal life ended in cities threatened by Iraqi Scud attacks; recalled how Iraqi conventionally armed missile attacks during the Iran-Iraq War had caused nearly 2,000 deaths and forced the evacuation of urban areas like Tehran with ruinous economic consequences; and warned that such events could happen to U.S. and allied metropolitan areas due to ballistic missile proliferation (U.S. Congress, House Committee on Government Operations, Subcommittee on Legislation and National Security 1992, 284).

SDI supporters further stressed how the presence of ballistic missiles equipped with weapons of mass destruction in the hands of third world countries such as Iraq could drastically reduce the flexibility of U.S. leaders in responding to such threats. Examples of this reduced flexibility included U.S. leaders having to weigh the possibility of third-party ballistic missile strikes against U.S. forces, allies, or U.S. population centers severe enough to limit the president’s freedom of action; emerging ballistic missile threats with a debilitating effect on the U.S. capability to establish allied coalitions and respond to aggression as it had in Operation Desert Storm; and the growing danger of activities such as escorting threatened commercial shipping through hostile waters, as during the Iran-Iraq War, or militarily evacuating U.S. citizens from foreign hot spots. Payne and other missile defense supporters stressed that such defenses enable the United States to maintain the credibility of its overseas security commitments and encourage the belief that the United States will not be deterred from defending its national interests and allies (U.S. Congress, House Committee on Government Operations, Subcommittee on Legislation and National Security 1992, 284-285).

SDIO continued its activities as the Clinton administration began in 1993. Ballistic missile defense was not high among the national security priorities of this administration as it took office (Lindsay and O’Hanlon 2001, 87). SDIO’s initial institutional incarnation came to an end with DOD Directive 5134.9 on June 14, 1994, which established the BMDO as the organizational focal point for U.S. ballistic missile defense efforts. The now preponderant emphasis on developing defenses against theater ballistic missile threats, while also adhering to the ABM Treaty, was reflected in BMDO’s mission, whose characteristics included deploying an effective and rapidly mobile theater missile defense system to protect forward-deployed and expeditionary components of U.S. and allied armed forces; defending the U.S. homeland against limited ballistic missile attacks; demonstrating advanced technology options for enhanced missile defense systems including space-based defenses and associated sensors; making informed decisions on development, production, and deployment of such systems in consultation with U.S. allies; and adhering to existing international agreements and treaty obligations while using non-nuclear weapon technologies (U.S. Department of Defense 1994, 1-2).

SDI may have been conceived and initially presented with idealistic fervor, but its inception was driven by profound and substantive dissatisfaction with the military and moral predicament of the United States being unable to defend its population and military interests against hostile ballistic missile attacks. SDI and its successor programs have survived and evolved into contemporary national missile defense programs because of their ability to adapt pragmatically to the prevailing political, economic, and military environments facing the United States and its national security interests (Clagett 1996).

ASAT (antisatellite) weapons

Concern over a growing Soviet ASAT program caused the Reagan administration to begin efforts to remove congressional restrictions on testing ASAT capability in space. This concern resulted in the February 6, 1987 issuance of NSDD 258, under which DOD and the Air Force would request funding to conduct relevant research and development efforts in this area and further study of long-range U.S. ASAT requirements would continue (U.S. National Security Council 2006(a), 255-256).

A final noteworthy Reagan administration space policy document was NSDD 293 on national space policy, issued January 5, 1988. This document reaffirmed that the United States was committed to peacefully exploring and using outer space and that peaceful purposes allowed for military and intelligence-related activities in pursuit of national security and other goals; that the United States would pursue these military and intelligence activities to support its inherent right of self-defense and its defense commitments to allies; that the United States rejected the claims of other nations to sovereignty over space or celestial bodies; that there can be limits on the fundamental right of sovereign nations to acquire data from space; and that the United States considered the space systems of other nations to have the right to pass through and conduct operations in space without interference (U.S. National Security Council 2006(b), 13-14).

This document went on to outline four basic DOD space mission areas: space support, force enhancement, space control, and force application. Space support guidelines stressed that military and intelligence space sectors could use manned and unmanned launch systems as determined by specific DOD or intelligence mission requirements. Force enhancement guidelines stressed that DOD would work with the intelligence community to develop, operate, and maintain space systems and develop appropriate plans and structures for meeting the operational requirements of land, sea, and air forces through all conflict levels. Space control guidelines stressed that DOD would develop, operate, and maintain enduring space systems to ensure freedom of action in space and deny such freedom to adversaries, that the United States would develop and deploy a comprehensive ASAT capability including both kinetic and directed energy weapons, and that DOD space programs would explore developing a space assets survivability enhancement program emphasizing long-term planning for future requirements. Where force application was concerned, this document proclaimed that DOD would, consistent with treaty requirements, conduct research, development, and planning to be prepared to acquire and deploy space weapons systems if national security conditions required them (U.S. National Security Council 2006(b), 15-16).

Projecting force from space was a particularly significant new facet of U.S. military space policy asserted in this document. This statement also reflected a belief in many governmental sectors that space was comparable to the air, land, and sea war-fighting environments and that space combat operations should be pursued to defend national interests and enhance national security. NSDD 293 culminated a period of significant growth in U.S. military space policy during the Reagan presidency. This administration saw AFSPACOM established in 1982 to consolidate space activities and link space-related research and development with operational space users. Army and Navy space commands were also created during this time, and USSPACECOM was established in 1985 as a unified multi-service space command. Additional Reagan administration developments in military space policy included establishing a Space Technology Center at New Mexico’s Kirtland Air Force Base; forming a DOD Space Operations Committee; elevating NORAD’s commander in chief to a four-star position and broadening that position’s space responsibilities; creating a separate Air Force Space Division and establishing a deputy commander for Space Operations; constructing a consolidated Space Operations Center; creating a Directorate for Space Operations in the Office of the Deputy Chief of Staff/Plans and Operations in Air Force headquarters; establishing SDIO; and establishing a space operations course at the Air Force Institute of Technology (Lambakis 2001, 229-230).

Broader Implications of the Strategic Defense Initiative

Though compelling for its parsimonious logic, mutually assured destruction came at a cost. By holding each other’s populations hostage to nuclear annihilation, the superpowers reinforced preexisting ideological and geopolitical hostility. From this angle, the underlying logic of mutual vulnerability violated civilized norms and was at best a necessary evil in the absence of viable policy alternatives such as disarmament or strategic defense. This calculus changed after the Reagan administration took office.

Long before entering the Oval Office, Ronald Reagan developed an interest in strategic defense, stimulated in part by visits to Lawrence Livermore Laboratory and the North American Air Defense Command. Reagan’s distaste for mutually assured destruction, combined with technological advances, convinced him to embark on a new strategic path, against the advice of some of his closest advisors. After deeming mutually assured destruction immoral in a nationally televised address on March 23, 1983, President Reagan challenged the scientific community “to give us the means of rendering these nuclear weapons impotent and obsolete.” Some critics immediately challenged the president on purely technical grounds, saying it would never work. Others deemed it provocative, believing the Soviets would conclude that the Strategic Defense Initiative (SDI) was a cover for the United States to achieve a first-strike capability.

Though clearly aspirational in the near term, SDI jarred the Soviets. So did Reagan’s later refusal to treat SDI as a bargaining chip at the Reykjavik, Iceland, talks in October 1986. After spending decades and hundreds of billions building their land-based ICBM force, Moscow found itself at the wrong end of a cost-imposition strategy. To be sure, the ensuing SDI debate included much talk about the potential for the Soviets to use cheap countermeasures to defeat an American missile defense system. Since Reagan’s initial concept of SDI did not specify technologies in advance, these scenarios reflected far more speculation than analysis. This much was clear: the United States changed the strategic debate to favor its own technological potential at the expense of the Soviet Union.

The Reagan administration’s newfound commitment to strategic defense exploited the Soviets’ relative disadvantage in microelectronics and computer technology. The Soviet leadership grasped the implications of this revolution even before Reagan announced his SDI initiative. In the past, the Soviets’ wide-ranging espionage efforts had offset some of the US technological advantages, as Soviet agents pilfered certain weapons designs, including, most spectacularly, that of the atomic weapon. But the microelectronics-based revolution was too broad and too deep for the Soviets to steal their way to technological equivalency. Equally problematic, the Soviets’ centralized economic system lacked the ability to create disruptive technologies of its own, let alone match those of the United States. In this case, as it usually does, entrepreneurship handily beat centralized planning in generating technological innovation.

1812 – Russia’s War Machine I

Apart from the Romanovs, the greatest beneficiaries of eighteenth-century Russia’s growing wealth were the small group of families who dominated court, government and army in this era and formed the empire’s aristocratic elite. Some of these families were older than the Romanovs, others were of much more recent origin, but by Alexander I’s reign they formed a single aristocratic elite, united by wealth and a web of marriages. Their riches, social status and positions in government gave them great power. Their patron–client networks stretched throughout Russia’s government and armed forces. The Romanovs themselves came from this aristocratic milieu. Their imperial status had subsequently raised them far above mere aristocrats, and the monarchs were determined to preserve their autonomy and never allow themselves to be captured by any aristocratic clique. Nevertheless, like other European monarchs they regarded these aristocratic magnates as their natural allies and companions, as bulwarks of the natural order and hierarchy of a well-run society.

The aristocracy used a number of crafty ways to preserve their power. In the eighteenth century they enlisted their sons in Guards regiments in childhood. By the time they reached their twenties, these sprigs of the aristocracy used their years of ‘seniority’ and the privileged status of the Guards to jump into colonelcies in line regiments. Catherine the Great’s son, Paul I, who reigned from 1796 to 1801, stopped this trick but very many of the aristocrats in senior posts in 1812–14 had benefited from it. Even more significant was the use made by the aristocracy of positions at court. Though mostly honorific, these positions allowed young gentlemen of the bedchamber (Kammerjunker) and lords in waiting (Kammerherr) to transfer into senior positions in government of supposedly equivalent rank.

In the context of eighteenth-century Europe there was nothing particularly surprising about this. Young British aristocrats bought their way rapidly up the military hierarchy, sat in Parliament for their fathers’ pocket boroughs and sometimes inherited peerages at a tender age. Unlike the English, Russian aristocrats did not control government through their domination of Parliament. A monarch who bungled policy or annoyed the Petersburg elite too deeply could be overthrown and murdered, however. Paul I once remarked that there were no Grands Seigneurs in Russia save men who were talking to the emperor and even their status lasted only as long as the emperor deigned to continue the conversation. He was half correct: Russian magnates were more subservient and less autonomous than their equivalents in London or Vienna. But he was also half wrong, and he paid for his miscalculation with his life in 1801, when he was murdered by members of the aristocracy who were outraged by his arbitrary behaviour and led by the governor-general of Petersburg, Count Peter von der Pahlen.

The Russian aristocracy and gentry made up the core of the empire’s ruling elite and officer corps. But the Romanovs ruled over a multi-ethnic empire. They allied themselves to their empire’s non-Russian aristocracies and drew them into their court and service. The most successful non-Russian aristocrats were the German landowning class in the Baltic provinces. By one conservative estimate 7 per cent of all Russian generals in 1812 were Baltic German nobles. The Balts partly owed their success to the fact that, thanks to the Lutheran Church and the eighteenth-century Enlightenment in northern Europe, they were much better educated than the average Russian provincial noble.

There was nothing unusual at the time in an empire being ruled by diverse and alien elites. In its heyday, the Ottoman ruling elite was made up of converted Christian slaves. The Qing and Mughal empires were run by elites who came from beyond the borders of China or the subcontinent. By these standards, the empire of the Romanovs was very Russian. Even by European standards the Russian state was not unique. Very many of the Austrian Empire’s leading soldiers and statesmen came from outside the Habsburgs’ own territories. None of Prussia’s three greatest heroes in 1812–14 – Blücher, Scharnhorst or Gneisenau – was born a Prussian subject or began his career in the Prussian army.

It is true that there were probably more outsiders in the Russian army than in Austria or Prussia. European immigrants also stood out more sharply in Petersburg than in Berlin or Vienna. In the eighteenth century many European soldiers and officials had entered Russian service in search of better pay and career prospects. In Alexander’s reign they were joined by refugees fleeing the French Revolution or Napoleon. Above all, European immigrants filled the gap created by the slow development of professional education or a professional middle class in Russia. Doctors were one such group. Even in 1812 there were barely 800 doctors in the Russian army, many of them of German origin. Military engineers were also in short supply. In the eighteenth century Russian engineers had been the younger brothers of the artillery and came under its jurisdiction. Though they gained their independence under Alexander, there were still too few trained engineer officers trying to fulfil too diverse a range of duties and Russia remained in search of foreign experts whom it might lure into its service. On the eve of 1812 the two most senior Russian military engineers were the Dutchman Peter van Suchtelen and the German Karl Oppermann.

An even more important nest of foreigners was the quartermaster-general’s department, which provided the army’s general staff officers. Almost one in five of the ‘Russian’ staff officers at the battle of Borodino were not even subjects of the tsar. Fewer than half had Slav surnames. The general staff was partly descended from the bureau of cartography, a very specialized department which required a high level of mathematical skill. This ensured that it would be packed with foreigners and non-Russians. As armies grew in size and complexity in the Napoleonic era, the role of staffs became crucial. This made it all the more galling for many Russians that so large a proportion of their staff officers had non-Russian names. In addition, Napoleon’s invasion in 1812 set off a wave of xenophobia in Russia, which sometimes targeted ‘foreigners’ in the Russian army, without making much distinction between genuine foreigners and subjects of the tsar who were not ethnic Russians. Without its non-Russian staff officers the empire could never have triumphed in 1812–14, however. Moreover, most of these men were totally loyal to the Russian state, and their families usually in time assimilated into Russian society. These foreign engineers and staff officers also helped to train new generations of young Russian officers to take their places.

For the tsarist state, as for all the other great powers, the great challenge of the Napoleonic era was to mobilize resources for war. There were four key elements to what one might describe as the sinews of Russian power. They were people, horses, military industry and finance. Unless the basic strengths and limitations of each of these four elements are grasped it is not possible to understand how Russia fought these wars or why she won them.

Manpower was any state’s most obvious resource. At the death of Catherine II in 1796 the population of the Russian empire was roughly 40 million. This compared with 29 million French subjects on the eve of the Revolution and perhaps 22 million inhabitants of the Habsburgs’ lands at that time. The Prussian population was only 10.7 million even in 1806. The United Kingdom stood somewhere between Prussia and the larger continental powers. Its population, including the Irish, was roughly 15 million in 1815, though Indian manpower was just becoming a factor in British global might. By European standards, therefore, the Russian population was large but it was not yet vastly greater than that of its Old Regime rivals and it was much smaller than the human resources controlled by Napoleon. In 1812 the French Empire, in other words all territories directly ruled from Paris, had a population of 43.7 million. But Napoleon was also King of Italy, which had a population of 6.5 million, and Protector of the 14 million inhabitants of the Confederation of the Rhine. Some other territories were also his to command: most notably from the Russian perspective the Duchy of Warsaw, whose population of 3.8 million made a disproportionate contribution to his war effort in 1812–14. A mere listing of these numbers says something about the challenge faced by Russia in these years.

From the state’s perspective the great point about mobilizing the Russian population was that it was not merely numerous but also cheap. A private in Wellington’s army scarcely lived the life of a prince but his annual pay was eleven times that of his Russian equivalent even if the latter was paid in silver kopeks. In reality the Russian private in 1812 was far more likely to be paid in depreciating paper currency worth one-quarter of its face value. Comparisons of prices and incomes are always problematic because it is often unclear whether the Russian rubles cited are silver or paper, and in any case the cost of living differed greatly between Russia and foreign countries, above all Britain. A more realistic comparison is the fact that even in peacetime a British soldier received not just bread but also rice, meat, peas and cheese. A Russian private was given nothing but flour and groats, though in wartime these were supplemented by meat and vodka. The soldiers boiled their groats into a porridge which was their staple diet.

A Russian regiment was also sometimes provided not with uniforms and boots but with cloth and leather from which it made its own clothing and footwear. Powder, lead and paper were also delivered to the regiments for them to turn into cartridges. Nor was it just soldiers whose labour was used for free by the state. A small minority of conscripts were sent not to the army but to the mines. More importantly, when Peter the Great first established the ironworks which were the basis of Russian military industry he assigned whole villages to work in them in perpetuity. He did the same with some of the cloth factories set up to clothe his army. This assigned labour was all the cheaper because the workers’ families retained their farms, from which they were expected to feed themselves.

So long as all European armies were made up of long-serving professionals the Russian military system competed excellently. The system of annual recruit levies allowed the Russian army to be the largest and cheapest in Europe without putting unbearable pressure on the population. Between 1793 and 1815, however, changes began to occur, first in France and later in Prussia, which put a question mark against its long-term viability. Revolutionary France began to conscript whole ‘classes’ of young men in the expectation that once the war was over they would return to civilian life as citizens of the new republic. In 1798 this system was made permanent by the so-called Loi Jourdain, which established a norm of six years’ service. A state which conscripted an entire age group for a limited period could put more men in the ranks than Russia. In time it would also have a trained reserve of still relatively young men who had completed their military service. If Russia tried to copy this system its army would cease to be a separate estate of the realm and the whole nature of the tsarist state and society would have to change. A citizen army was barely compatible with a society based on serfdom. The army would become less reliable as a force to suppress internal rebellion. Noble landowners would face the prospect of a horde of young men returning to the countryside who (if existing laws remained) were no longer serfs and who had been trained in arms.

In fact the Napoleonic challenge came and went too quickly for the full implications of this threat to materialize. Temporary expedients sufficed to overcome the emergency. In 1807 and again in 1812–14 the regime raised a large hostilities-only militia despite the fears of some of its own leaders that this would be useless in military terms and might turn into a dangerous threat to the social order. When the idea of a militia was first mooted in the winter of 1806–7, Prince I. V. Lopukhin, one of Alexander’s most senior advisers, warned him that ‘at present in Russia the weakening of ties of subordination to the landowners is more dangerous than foreign invasion’. The emperor was willing to take this risk and his judgement proved correct. The mobilization of Russian manpower through a big increase in the regular army and the summoning of the militia just sufficed to defeat Napoleon without requiring fundamental changes in the Russian political order.

Next only to men as a military resource came horses, with which Russia was better endowed than any other country on earth. Immense herds dwelt in the steppe lands of southern Russia and Siberia. These horses were strong, swift and exceptionally resilient. They were also very cheap. One historian of the Russian horse industry calls these steppe horses ‘a huge and inexhaustible reserve’. The closest the Russian cavalry came to pure steppe horses was in its Cossack, Bashkir and Kalmyk irregular regiments. The Don Cossack horse was ugly, small, fast and very easy to manoeuvre. It could travel great distances in atrocious weather and across difficult terrain for days on end and with minimal forage in a way that was impossible for regular cavalry. At home the Cossack horse was always out to grass. In winter it would dig out a little trench with its front hoofs to expose roots and grasses hidden under the ice and snow. Cossacks provided their own horses when they joined the army, though in 1812–14 the government did subsidize them for animals lost on campaign. Superb as scouts and capable of finding their way across any terrain even in the dark, the Cossacks also spared the Russian regular light cavalry many of the duties which exhausted their equivalents in other armies: but the Russian hussar, lancer and mounted jaeger regiments also themselves had strong, resilient, cheap and speedy horses with a healthy admixture of steppe blood.

Traditionally the medium (dragoon) and heavy (cuirassier) horses had been a much bigger problem. In fact on the eve of the Seven Years War Russia had possessed no viable cuirassier regiments and even her dragoons had been in very poor shape. By 1812, however, much had changed, above all because of the huge expansion of the Russian horse-studs industry in the last decades of the eighteenth century. Two hundred and fifty private studs existed by 1800, almost all of which had been created in the last forty years. They provided some of the dragoon and most of the cuirassier horses. British officers who served alongside the Russians in 1812–14 agreed that the heavy cavalry was, in the words of Sir Charles Stewart, ‘undoubtedly very fine’. Sir Robert Wilson wrote that the Russian heavy cavalry ‘horses are matchless for an union of size, strength, activity and hardiness; whilst formed with the bulk of the British cart-horse, they have so much blood as never to be coarse, and withal are so supple as naturally to adapt themselves to the manege, and receive the highest degree of dressing’.

If there was a problem with the Russian cuirassier horse it was perhaps that it was too precious, at least in the eyes of Alexander I. Even officially these heavy cavalry horses cost two and a half times as much as a hussar’s mount, and the horses of the Guards cuirassiers – in other words the Chevaliers Gardes and Horse Guard regiments – cost a great deal more. Their feeding and upkeep were more expensive than that of the light cavalry horses and, as usual with larger mounts, they had less endurance and toughness. Since they came from studs they were also much harder to replace. Perhaps for these reasons, in 1813–14 the Russian cuirassiers were often kept in reserve and saw limited action. Alexander was furious when on one occasion an Austrian general used them for outpost duty and allowed them to sustain unnecessary casualties.

Russian military industry could usually rely on domestic sources for its raw materials with some key exceptions. Much saltpetre needed to be imported from overseas and so too did lead, which became an expensive and dangerous weakness in 1807–12 when the Continental System hamstrung Russian overseas trade. Wool for the army’s uniforms was also a problem, because Russia only produced four-fifths of the required amount. There were also not enough wool factories to meet military demand as the army expanded after 1807. The truly crucial raw materials were iron, copper and wood, however, and these Russia had in abundance. At the beginning of Alexander’s reign Russia was still the world’s leading iron producer and stood second only to Britain in copper. Peter the Great had established the first major Russian ironworks to exploit the enormous resources of iron ore and timber in the Urals region, on the borders of Europe and Siberia. Though Russian metallurgical technology was beginning to fall well behind Britain, it was still more than adequate to cover military needs in 1807–14. The Ural region was far from the main arms-manufacturing centres in Petersburg and in the city of Tula, 194 kilometres south of Moscow, but efficient waterways linked the three areas. Nevertheless, any arms or ammunition produced in the Urals works would not reach armies deployed in Russia’s western borderlands for over a year.

Arms production fell into two main categories: artillery and handheld weapons. The great majority of Russian iron cannon were manufactured in the Alexander Artillery Works in Petrozavodsk, a small town in Olonets province north-east of Petersburg. They were above all designed for fortresses and for the siege train. Most of the field artillery came from the St Petersburg arsenal: it produced 1,255 new guns between 1803 and 1818. The technology of production was up to date in both works. In the Petersburg Arsenal a steam-powered generator was introduced in 1811 which drove all its lathes and its drilling machinery. A smaller number of guns were produced and repaired in the big depots and workshops in Briansk, a city near the border of Russia and Belorussia. Russian guns and carriages were up to the best international standards once Aleksei Arakcheev’s reforms of the artillery were completed by 1805. The number of types of gun was reduced, equipment was standardized and lightened, and careful thought went into matching weapons and equipment to the tactical tasks they were intended to fulfil. The only possible weakness was the Russian howitzers, which could not be elevated to the same degree as the French model and therefore could not always reach their targets when engaged in duels with their French counterparts. On the other hand, thanks to the lightness of their carriages and the quality of their horses the Russian horse artillery was the most mobile and flexible on the battlefield by 1812–14.

[Illustrations: The Russian Army of 1812; Russian Artillery of the Napoleonic War]

1812 – Russia’s War Machine II

The situation as regards handheld firearms was much less satisfactory. Muskets were produced in three places: the Izhevsk works in Viatka province near the Urals turned out roughly 10 per cent of all firearms manufactured in 1812–14; many fewer were produced at the Sestroretsk works 35 kilometres from Petersburg, though Sestroretsk did play a bigger role in repairing existing weapons; the city of Tula was therefore by far the most important source of muskets in 1812–14.

The Tula state arms factory had been founded by Peter the Great in 1712 but production was shared between it and private workshops. In 1812, though the state factory produced most of the new muskets, six private entrepreneurs also supplied a great many. These entrepreneurs did not themselves own factories, however. They met state orders partly from their own rather small workshops but mostly by subcontracting the orders to a large number of master craftsmen and artisans who worked from their own homes. The war ministry complained that this wasted time, transport and fuel. The state factory was itself mostly just a collection of smallish workshops with production often by hand. The labour force was divided into five crafts: each craft was responsible for one aspect of production (gun barrels, wooden stocks, firing mechanisms, cold steel weapons, all other musket parts). Producing the barrels was the most complicated part of the operation and caused most of the delays, partly because skilled labour was in short supply.

The biggest problem both in the factory and the private workshops was out-of-date technology and inadequate machine tools. Steam-powered machinery was only introduced at the very end of the Napoleonic Wars and in any case proved a failure, in part because it required wood for fuel, which was extremely expensive in the Tula region. Water provided the traditional source of power and much more efficient machinery was introduced in 1813 which greatly reduced the consumption of water and allowed power-based production to continue right through the week. Even after the arrival of this machinery, however, shortage of water meant that all power ceased for a few weeks in the spring. In 1813, too, power-driven drills for boring the musket barrels were introduced: previously this whole job had been done by hand by 500 men, which was a serious brake on production. A Russian observer who had visited equivalent workshops in England noted that every stage in production there had its own appropriate machine tools. In Tula, on the contrary, many specialist tools, especially hammers and drills, were not available: in particular, it was almost impossible to acquire good steel machine tools. Russian craftsmen were sometimes left with little more than planes and chisels.

Given the problems it faced, the Russian arms industry performed miracles in the Napoleonic era. Despite the enormous expansion of the armed forces in these years and heavy loss of weapons in 1812–14, the great majority of Russian soldiers did receive firearms and most of them were made in Tula. These muskets cost one-quarter of their English equivalents. On the other hand, without the 101,000 muskets imported from Britain in 1812–13 it would have been impossible to arm the reserve units which reinforced the field army in 1813. Moreover, the problems of Russian machine tools and the tremendous pressures for speed and quantity made it inevitable that some of these muskets would be sub-standard. One British source was very critical of the quality of Tula muskets in 1808, for example. On the other hand, a French test of muskets’ firing mechanisms concluded that the Russian models were somewhat more reliable than their own, though much less so than the British and Austrian ones. The basic point was that all European muskets of this era were thoroughly unreliable and imperfect weapons. The Russian ones were undoubtedly worse than the British, and probably often worse than those of the other major armies too. Moreover, despite heroic levels of production in 1812–14 the Russian arms industry could never supply enough new-model muskets to ensure that all soldiers in a battalion had one type and calibre of firearm, though once again Russia’s was an extreme example of a problem common to all the continental armies.

Perhaps the quality of their firearms did exert some influence on Russian tactics. It would have been an optimistic Russian general who believed that men armed with these weapons could emulate Wellington’s infantry by deploying in two ranks and repelling advancing columns by their musketry. The shortcomings of the Russian musket were possibly an additional reason for the infantry to fight in dense formations supported by the largest ratio of artillery to foot-soldiers of any European army. However, although the deficiencies of the Russian musket may perhaps have influenced the way the army fought, they certainly did not undermine its viability on the battlefield. The Napoleonic era was still a far cry from the Crimean War, by which time the Industrial Revolution was beginning to transform armaments and the superiority of British and French rifled muskets over Russian smoothbores made life impossible for the Russian infantry.

The fourth and final element in Russian power was fiscal, in other words revenue. Being a great power in eighteenth-century Europe was very expensive and the costs escalated with every war. Military expenditure could cause not just fiscal but also political crisis within a state. The most famous example of this was the collapse of the Bourbon regime in France in 1789, brought on by bankruptcy as a result of the costs of intervention in the American War of Independence. Financial crisis also undermined other great powers. In the midst of the Seven Years War, for example, it forced the Habsburgs substantially to reduce the size of their army.

The impact of finance on diplomatic and military policy continued in the Napoleonic era. In 1805–6 Prussian policy was undermined by lack of funds to keep the army mobilized and therefore a constant threat to Napoleon. Similarly, in 1809 Austria was faced with the choice of either fighting Napoleon immediately or reducing the size of its army, since the state could not afford the current level of military expenditure. The Austrians chose to fight, were defeated, and were then lumbered with a war indemnity which crippled their military potential for years to come. An even more crushing indemnity was imposed on Prussia in 1807. In 1789 Russia had a higher level of debt than Austria or Prussia. Inevitably the wars of 1798–1814 greatly increased that debt. Unlike the Austrians or Prussians, in 1807 Russia did not have to pay an indemnity after being defeated by Napoleon. Had it lost in 1812, however, the story would have been very different.

Even without the burdens of a war indemnity Russia suffered financial crisis in 1807–14. Ever since Catherine II’s first war with the Ottomans (1768–74) expenditure had regularly exceeded revenue. The state initially covered the deficit in part by borrowing from Dutch bankers. By the end of the eighteenth century this was no longer possible: interest payments had become a serious burden on the treasury. In any case the Netherlands had been overrun by France and its financial markets were closed to foreign powers. Even before 1800 most of the deficit had been covered by printing paper rubles. By 1796 the paper ruble was worth only two-thirds of its silver equivalent. Constant war after 1805 caused expenditure to rocket. The only way to cover the cost was by printing more and more paper rubles. By 1812 the paper currency was worth roughly one-quarter of its ‘real’ (i.e. silver) value. Inflation caused a sharp rise in state expenditure, not least as regards military arms, equipment and victuals. To increase revenue rapidly enough to match costs was impossible. Meanwhile the finance ministry lived in constant dread of runaway inflation and the complete collapse in trust in the paper currency. Even without this, dependence on depreciating paper currency had serious risks for the Russian army’s ability to operate abroad. Some food and equipment had to be purchased in the theatre of operations, above all when operating on the territory of one’s allies, but no foreigner would willingly accept paper rubles in return for goods and services.

At the death of Catherine II in 1796 Russian annual revenue amounted to 73 million rubles, or £11.7 million; once collection costs are deducted this sinks to £8.93 million, and lower still if the depreciating value of the paper ruble is taken into account. Austrian and Prussian revenues were of a similar order: in 1800, for example, Prussian gross revenue was £8.65 million, while in 1788 Austrian gross revenue had been £8.75 million. Even in 1789, with her finances in deep crisis, French royal revenue at 475 million francs, or £19 million, was much higher. Britain was in another league again: the new taxes introduced in 1797–9 raised her annual revenue from £23 million to £35 million.

If Russia nevertheless remained a formidable great power, that was because crude comparisons of revenue across Europe have many flaws. As we have seen in this chapter, all key military resources were far cheaper in Russia than, for example, in Britain. Even in peacetime the state barely paid at all for some services and goods. It even succeeded in palming off on the peasantry part of the cost of feeding most of the army, which was quartered in the villages for most of the year. In 1812 this principle was taken to an extreme, with massive requisitioning and even greater voluntary contributions. One vital reason why Russia had been victorious at limited cost in the eighteenth century was that it had fought almost all its wars on enemy territory and, to a considerable extent, at foreign expense. This happened again in 1813–14.

In 1812–14 the Russian Empire defeated Napoleon by a narrow margin and by straining to breaking point almost every sinew of its power. Even so, on its own Russia could never have destroyed Napoleon’s empire. For this a European grand alliance was needed. Creating, sustaining and to some extent leading this grand alliance was Alexander I’s greatest achievement. Many obstacles lay in Alexander’s path. To understand why this was the case and how these difficulties were overcome requires some knowledge of how international relations worked in this era.

Alexander I understood the power of regimental solidarity and tried to preserve it by ensuring that as far as possible officers remained within a single regiment until they reached senior rank. Sometimes this was a losing battle since officers could have strong personal motivation for transfer. Relatives liked to serve together. A more senior brother or an uncle in the regiment could provide important patronage. Especially in wartime, the good of the service sometimes required transferring officers to fill vacancies in other regiments. So too did the great expansion of the army in Alexander’s reign. Seventeen new regiments were founded between 1801 and 1807 alone: experienced officers needed to be found for them. In these circumstances it is surprising that more than half of all officers between the rank of ensign and captain had served in only one regiment, as had a great many majors. Particularly in older regiments such as the Grenadiers, the Briansk or Kursk infantry regiments, or the Pskov Dragoons the number of officers up to the rank of major who had spent their whole lives in the regiments was extremely high. As one might expect, the Preobrazhensky Guards, the senior regiment in the Russian army, was the extreme case, with almost all the officers spending their whole careers in the regiment. Add to this the fact that the overwhelming majority of Russian officers were bachelors and the strength of their commitment to their regiments becomes evident.

Nevertheless, the greatest bearers of regimental loyalty and tradition were the non-commissioned officers. In the regiments newly formed in Alexander’s reign, the senior NCOs arrived when the regiment was created and served in it for the rest of their careers. Old regiments would have a strong cadre of NCOs who had served in the unit for twenty years or more. In a handful of extreme cases such as the Briansk Infantry and Narva Dragoons every single sergeant-major, sergeant and corporal had spent his entire military life in the regiment. In the Russian army there was usually a clear distinction between the sergeant-majors (fel’dfebeli in the infantry and vakhmistry in the cavalry) on the one hand, and the ten times more numerous sergeants and corporals (unterofitsery) on the other. The sergeants and corporals were mostly peasants. They gained their NCO status as veterans who had shown themselves to be reliable, sober and skilled in peacetime, and courageous on the battlefield. Like the conscript body as a whole, the great majority of them were illiterate.

The sergeant-majors on the other hand were in the great majority of cases literate, though particularly in wartime some illiterate sergeants who had shown courage and leadership might be promoted to sergeant-major. Many were the sons of priests, but above all of the deacons and other junior clergy who were required to assist at Orthodox services. Most sons of the clergy were literate and the church could never find employment for all of them. They filled a key gap in the army as NCOs. But the biggest source of sergeant-majors was soldiers’ sons, who were counted as hereditary members of the military estate. The state set up compulsory special schools for these boys: almost 17,000 boys were attending these schools in 1800. In 1805 alone 1,893 soldiers’ sons entered the army. The education provided by the schools was rudimentary and the discipline was brutal, but they did train many drummers and other musicians for the army, as well as some regimental clerks. Above all, however, they produced literate NCOs, imbued with military discipline and values from an early age. As befitted the senior NCO of the Russian army’s senior regiment, the regimental sergeant-major of the Preobrazhenskys in 1807, Fedor Karneev, was the model professional soldier: a soldier’s son with twenty-four years’ service in the regiment, an unblemished record, and a military cross for courage in action.

Although the fundamental elements of the Russian army were immensely strong, there were important weaknesses in its tactics and training in 1805 which, with the exception of its light cavalry, made it on the whole inferior to the French. The main reason for this was that the French army had been in almost constant combat with the forces of other great powers between 1792 and 1805. With the exception of the Italian and Swiss campaigns of 1799–1800, in which only a relatively small minority of regiments participated, the Russian army lacked any comparable wartime experience. In its absence, parade-ground values dominated training, reaching absurd levels of pedantry and obsession at times. Partly as a result, Russian musketry was inferior to French, as was the troops’ skill at skirmishing. The Russians’ use of massed bayonet attacks to drive off skirmishers was costly and ineffective. In 1805–6 Russian artillery batteries were often poorly shielded against the fire of enemy skirmishers.

The army’s worst problems revolved around coordination above the level of the regiment. In 1805 there were no permanent units of more than regimental size. At Austerlitz, Russian and Austrian columns put together at the last moment manoeuvred far less effectively than the permanent French divisions. In 1806 the Russians created their own divisions but coordination on the battlefield remained a weakness. The Russian cavalry would have been hard pressed to emulate Murat’s massed charge at Eylau. The Russian artillery certainly could not have matched the impressive concentration and mobility of Senarmont’s batteries at Friedland.

Most important, however, were weaknesses in the army’s high command, meaning the senior generals and, above all, the supreme commanders. At this level the Russians were bound to be inferior to the French. No one could match a monarch who was also a military genius. Although the Russian military performance was hampered by rivalry among its generals, French marshals cooperated no better in Napoleon’s absence. When Alexander seized effective command from Kutuzov before Austerlitz the result was a disaster. Thoroughly chastened, Alexander kept away from the battlefield in 1806–7. This solved one problem but created another. In the absence of the monarch the top leader needed to be a figure who could command obedience both by his reputation and by being unequivocally senior to all the other generals. By late 1806, however, all the great leaders of Catherine’s wars were dead. Mikhail Kutuzov was the best of the remaining bunch but he had been out of favour since Austerlitz. Alexander therefore appointed Field-Marshal Mikhail Kamensky to command the army on the grounds of his seniority, experience and relatively good military record. When he reached the army Kamensky’s confused and even senile behaviour quickly horrified his subordinates. As one young general, Count Johann von Lieven, asked on the eve of the first serious battles with the French: ‘Is this lunatic to command us against Napoleon?’

Hungarian Army: 44M “Mace Thrower”

44M. Buzogányvető with the three-legged mount

The first 44M. Buzogányvetős were mounted on a three-legged tripod, which made the weapon difficult to move. For lack of production capacity and time, however, the HTI did not design a new mobile platform or launcher for the weapon; instead, the two rockets and the protective shield were simply mounted on the wheeled mounts of captured Soviet PM M1910 (Soviet-made Maxim) or SG-43 Goryunov machine guns, of which the Hungarian Army had captured plenty during the war.

The unguided ‘Buzogány’ HEAT rocket

44M. Buzogányvető on Krupp Protze truck

At the beginning of hostilities against the Soviets, the Hungarians sorely lacked sufficient anti-armour capability against the rugged and dependable T-34 tank. A large proportion of their weapons were supplied by Germany, but the Germans were not willing to share all their developments, particularly as the war stretched on and the logistical enormity of the war effort on the Eastern Front threatened to overwhelm them. Thus the Hungarians began research and development of their own anti-tank weaponry in 1942.

The Hungarian 44M “Buzogányvető” (translated most closely as “mace thrower”) was a Hungarian-designed experimental anti-tank rocket system for use against Soviet armour towards the end of the Second World War. The system allowed for two types of warhead, giving it a multi-purpose role that made it just as effective against enemy infantry. It has since been regarded as one of the most effective anti-tank platforms of the war, despite its relatively short production run. The weapons were produced from spring 1944 until 20 December 1944, when the WM factory fell to the Soviets. Between 600 and 700 were produced in total, the majority of them used in the defence of Budapest in late December 1944.

The weapon consisted of a launcher holding two rockets with shaped-charge warheads, operated by a three-man crew. The tripod mount proved unwieldy, and captured Soviet wheeled mounts were often incorporated into operational units instead. The weapon’s operational parameters made it ideally suited to the close-quarters urban fighting of the Siege of Budapest.

Two types of rocket were produced. The first was an anti-tank warhead known as ‘Buzogány’ (mace), from which the weapon derived its name; it carried 4.2 kg of explosive and was more than capable of tearing through 300 mm of armour – more than sufficient against any Soviet heavy tank – at an effective range of 1,200 m. The second warhead was known as ‘Zápor’ (rainfall, shower) and was used in an anti-personnel capacity.

Main features of the 44M. Buzogányvető:

– full length without rockets: 970 mm

– launcher tube length: 523 mm

– launcher tube diameter: 100 mm

– rocket head diameter: 215 mm

– full weight: 29.2 kg

– rocket head weight: 4.2 kg

– range: 500–1,200 m, max. 2,000 m

– penetration: approx. 300 mm

– operating crew: 3

Cold War Intercontinental Ballistic Missiles I

THE US ICBM PROGRAMME

Snark and Navaho

In the immediate post-war years the feeling in the United States was that ballistic missiles offered the best long-term solution for strategic warfare, but that the technology of the time did not appear to make it possible to build a missile with the necessary range (9,300 km) and capable of carrying a nuclear payload, which at that time was large and heavy, weighing some 3 tonnes. The Convair company flight-tested the intercontinental-range MX-774 missile in 1948, but the newly independent US air force decided to follow the path pioneered by the German V-1 ‘flying bomb’ and to develop cruise missiles instead.

The first of these was the N-69 Snark pilotless bomber, which was much larger than the V-1 and had a range of 10,200 km, cruising at a height of some 12,000 m and using a star tracker to update its inertial navigation system. Its speed of 990 km/h meant, however, that, at its extreme range, it took some eleven hours to reach the target. The nose-cone carried a 5 MT (later 20 MT) nuclear warhead, and the missile could approach the target from any direction and at any height, while its very small radar cross-section made it difficult to detect. The Snark entered service in 1957 but was retired in 1961, when the Atlas ballistic missile became operational; its main significance was that it was the first operational missile to bring one superpower within attacking range of the other.
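
The flight-time figure follows directly from the quoted range and speed; a one-line check (illustrative only, using the text’s own numbers) confirms it:

```python
# Flight time for a Snark flying to its extreme range, from the figures above.
range_km, speed_kmh = 10_200, 990
print(f"{range_km / speed_kmh:.1f} hours")
# ~10.3 h of cruise, consistent with the text's 'some eleven hours' once
# climb-out and routing are allowed for.
```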

Snark was due to be succeeded by the SM-64A Navaho, a vertically launched, winged cruise missile, which travelled at Mach 3.25 (3,500 km/h) at a height of 18,300 m. Navaho would almost certainly have proved a highly effective strategic weapon, but it never reached production, as the USAF had already transferred its attention to ICBMs.

Redstone and Jupiter

Development of long-range ballistic missiles in the United States in the immediate post-war years was erratic, to say the least. The US army had obtained the plans for the A-4 (V-2) and assembled a number of former German scientists, including Wernher von Braun, at the Redstone Arsenal. Their first product was the Redstone short-range (400 km), land-mobile, liquid-fuelled, nuclear-armed missile, which was in service from 1958 to 1963. Next the army started to develop the Jupiter, again a land-mobile missile system, but this time with a range of 2,400 km. This was midway through development when, in late 1956, the secretary of defense ordered that the US air force was to assume responsibility for all missiles with a range greater than 200 nautical miles (370 km). Development was completed by the USAF, and Jupiter subsequently saw limited service with the air force.

Thor

Having concentrated on long-range cruise missiles, the USAF now had to make up for a lot of lost ground. Despite having been handed the perfectly acceptable Jupiter by the army, it initiated a very expensive crash programme for its own IRBM, leading to the Thor. This did nothing that Jupiter could not already do, but operated from a fixed base rather than from a mobile platform. Thor’s 2,700 km range, however, was insufficient for the missile to be launched against the USSR from the continental USA, so it was handed over to the UK’s Royal Air Force, which deployed sixty missiles between 1959 and 1964.

The entire Thor storage-and-launch complex was above ground in unprotected shelters, and the missile had to be towed out to the launch pad, raised to the vertical, fuelled, prepared, and then launched, the whole process taking fifteen minutes. This was all done in the open, on concrete hard-standing, at well-documented sites, and was very vulnerable. No cost-effective measure to reduce the reaction time could be found, so the missile was phased out after only five years of service.

Atlas

Meanwhile, the USAF’s major development effort had turned to the Atlas missile, which was much larger and was a true ICBM, with a range of 14,000 km. Atlas benefited from much of the technology which had been developed for the Navaho cruise missile, and entered service in 1960.

The first USAF squadron equipped with the Atlas missile used an almost identical siting system to Thor, with six above-ground shelters and each missile having a thirty-minute launch countdown, but the next squadron’s nine missiles were in three separated groups of three, with individual shelters having a split roof, enabling the missiles to be raised to the vertical in situ, thus saving several minutes of launch time. The next three squadrons had similarly dispersed sites, but this time the missiles were housed in semi-hardened bunkers, recessed into the ground and with even greater separation. The final units were housed in hardened underground silos.

Titan

Titan I, which had a range of 10,000 km, was, like the final Atlas, located in silos and raised to the surface for launch; however, it had a new and much faster fuelling system, enabling it to be launched some twenty minutes after the countdown started. There were five Titan I sites, one with eighteen missiles and four with nine each, but the system had only a brief period of service, becoming operational in 1961 and being replaced by Titan II from 1963 onwards, the process being completed in 1966.

Despite its name, Titan II was almost totally different from Titan I, not least because of a 50 per cent increase in range, to 15,000 km. Again, the missiles were sited in squadrons consisting of three widely separated groups of three, with two squadrons at each of three bases, but the new system introduced a completely novel launch system, with the missile being launched from inside the silo. Two other advances in this missile were the use of an inertial guidance system and the use of storable liquid fuel – i.e. the fuel was already loaded in the missile, thus cutting out the time needed to fuel the earlier missiles. In combination these developments resulted in a launch time of just sixty seconds. Fifty-four missiles were deployed, being operational from 1963 to 1987.

Minuteman

By now, the future obviously lay with solid-fuelled missiles, which were safer and more reliable, and in simpler, cheaper and more survivable siting and launch systems. A rail-mobile system was considered for Minuteman I, but the silo option won.

The two-stage Minuteman I was deployed from 1962 onwards in individual unmanned silos, which were scattered over large areas. Ten silos were grouped into a ‘flight’, five flights into a ‘squadron’, and squadrons into ‘wings’: four of the five wings had three squadrons each, while the fifth had four. The overall total was 800 missiles.

Minuteman II was longer and heavier than Minuteman I, with extended range (12,500 km compared to 10,000 km) and a more accurate warhead. It entered service in 1966, and by 1969 it had replaced all Minuteman Is. Of the 450 deployed, ten were subsequently reconfigured to carry the Emergency Rocket Communications System (ERCS) and thus no longer carried nuclear warheads.

Minuteman III introduced a third stage and was also the first US ICBM to carry MIRVs, but its basing and launch systems were the same as those of Minuteman II.

Peacekeeper (MX)

The Missile, Experimental (MX) programme was one of the longest and most controversial in the Cold War, with much of the argument centring on the question of basing. Indeed, MX consumed money at a prodigious rate and gave rise to an industry of its own for many years before it began to make any contribution to Western deterrence. The programme started in the early 1970s, and eventually resulted in the fielding of just fifty Peacekeeper missiles in 1986. After all the argument on different basing systems, these were placed in Minuteman III silos. Peacekeeper had a range of 9,600 km and carried ten W-87 warheads, each with a yield of 300 kT and an accuracy (CEP) of 100 m, giving them an extremely high lethality. During the Cold War these would almost inevitably have been targeted on both Soviet leadership bunkers and ‘superhardened’ ICBM silos.

SOVIET ICBM DEVELOPMENT

The first official rocket-propulsion laboratory in the Soviet Union was opened in 1921, but attention was concentrated on short-range artillery missiles until after the Second World War, when the USSR produced a copy of the German A-4, known under the NATO reporting system as the SS-1, ‘Scunner’. The SS-2, ‘Sibling’, was similar, but with Soviet improvements to increase range and reliability, while the SS-3, ‘Shyster’, was the first to carry an atomic warhead.

SS-6

In the 1950s the USSR found itself without a strategic bomber force to counter the B-36s, B-47s and B-52s of the USAF, and the quickest way to produce an answer was an ICBM. The technology of the time was, however, comparatively crude: warheads were heavy, and the sum total of the components, the payload and the fuel needed for intercontinental range came to well over 200 tonnes. Nevertheless, the USSR, which was never deterred by the size of a project, pressed ahead to produce the huge SS-6, ‘Sapwood’, which first flew on 3 August 1957. The necessary thrust was obtained by using a basic missile surrounded by four large strap-on boosters, the main missile and each booster having a 102,000 kgf thrust rocket motor. Thus the device had a launch weight of no less than 300 tonnes, but was powered by motors with a total thrust of 510,000 kgf.
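
The thrust figures are internally consistent; a minimal sketch (assuming nothing beyond the numbers in the paragraph above) shows how the per-motor figure follows from the totals:

```python
# SS-6 'Sapwood' lift-off arithmetic, using only the figures quoted above.
motors = 1 + 4                      # basic missile plus four strap-on boosters
thrust_per_motor_kgf = 102_000
launch_weight_kg = 300 * 1000       # 300 tonnes

total_thrust_kgf = motors * thrust_per_motor_kgf
print(total_thrust_kgf)                            # 510,000 kgf, as quoted
print(total_thrust_kgf / launch_weight_kg)         # lift-off thrust/weight ~1.7
```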

As a strategic weapon the SS-6 was less than successful: it had a poor reaction time, due to the need to load huge quantities of cryogenic fuel; it was far too big to be put in a silo; its electronics were crude and unreliable; and it was very inaccurate, with a CEP of some 8 km. The knowledge that the USSR had such a powerful launch vehicle had a major psychological impact on the USA, but no more than four SS-6s were ever deployed operationally as ICBMs. The SS-6 was, however, used for space launches for many years, since it could lift the heavy weights needed for programmes such as Sputnik, Luna, Vostok, Voskhod, Mars and Venera.

SS-7/SS-8

The first really successful Soviet ICBM was the SS-7, ‘Saddler’, of which 186 were deployed from 1961 until it was withdrawn in 1979 under the terms of SALT I. The SS-7 was the first Soviet missile to enter service using storable liquid fuel. It had two stages giving it a range of some 11,500 km, and was therefore the first Soviet ICBM to pose a realistic threat to the continental USA, although its relative inaccuracy (it had a CEP of 2.8 km) restricted it to counter-value targets.

It was long a feature of Soviet military philosophy that an ambitious programme was backed up by a much less demanding and technically safer system, which in this case was the SS-8, ‘Sasin’. Only twenty-three SS-8s were ever deployed, and they had a limited life from 1965 to 1977.

SS-9/SS-10

The SS-9, ‘Scarp’, was the first of the second generation of Soviet ICBMs: a heavy, silo-based missile which became operational in 1966. Numbers peaked at 313 in 1970, remaining at this level until 1975, when retirements began, the last of the type being withdrawn in 1979. Four versions were known: the first to enter service was Mod 1, which had a 20 MT warhead, while Mod 2, the principal production version, had a 25 MT warhead – by far the most powerful warhead ever to achieve operational status in any country. The Mod 3 was a special version which was used to test the Fractional Orbital Bombardment System (FOBS), which was designed to attack the USA from the south-east; it caused considerable concern in the Pentagon. Mod 4 carried three MRVs, which impacted with the same spread as a typical USAF Minuteman missile complex, although it never actually entered service, the mission being allocated to the SS-11 Mod 3 instead.

The SS-10, ‘Scrag’, was the insurance against the failure of the SS-9. This huge missile, which used cryogenic fuels, was shown at the 1968 Red Square parade but never entered service.

SS-11

The two-stage SS-11, ‘Sego’, used storable liquid propellant and entered service in 1966, eventually serving in three principal variants. Mod 1 had a single 950 kT warhead; Mod 2 had increased range and throw weight, as well as penetration aids and a more accurate warhead; while Mod 3 carried three 200 kT MRVs, the first such system to be fielded by the USSR, with a footprint virtually identical to that of a Minuteman silo complex. The SS-11 had a long life, with just over half being replaced by the SS-17 and SS-19 in the late 1970s, while the balance of 420 remained until 1987, when they were replaced progressively by the road-mobile SS-25.

SS-13

Developed concurrently with the SS-11, the SS-13, ‘Savage’, was the first solid-fuel Soviet ICBM, and had an unusual construction with three stages linked by open Warren-girder trusses – a configuration matched only by the earlier SS-10. There were claims in the early 1970s that the SS-13 was being used in a mobile role, but these were never substantiated. The USSR claimed that the SS-25 was a modified version of the SS-13 (which was permitted under SALT II), and flew two missiles in 1986 to demonstrate that this was the case to the USA. Only sixty SS-13s entered service, and the production and maintenance of such a small number must have been very expensive. However, it must be assumed that it played a useful role in the Soviet nuclear force, as the SS-13 remained in service from 1972 until past the end of the Cold War.

SS-17

The SS-17, ‘Spanker’, which used storable liquid propellant, was developed in parallel with the SS-19 as a replacement for the SS-11 and was in service from 1975 to 1990. It was the first Soviet ICBM to be launched by using a gas generator to blow the missile out of the silo, with ignition taking place only when the missile was well clear. Known as the ‘cold-launch technique’, this method minimized damage to the silo and enabled it to be reused, which caused considerable alarm in the United States, as it was seen to indicate a plan for a nuclear war lasting several days, if not weeks. The second innovation was that several versions carried MIRVs, the first operational Soviet ICBMs to do so: Mods 1 and 3 carried four 200 kT MIRVs, but the Soviets, as always, hedged their bets, and the SS-17 Mod 2 carried a single 3.6 MT warhead.

SS-18

The SS-18, ‘Satan’, the successor to the SS-9, was by far the largest ICBM to be fielded by either of the two superpowers, and its throw weight of 8,800 kg was the greatest of any Cold War missile. Starting in 1975, it was deployed in former SS-9 silos, which were modified and upgraded to take the new missile. Mods 1 and 3 both had a single large 20 MT warhead, while Mods 2 and 4 each had ten 500 kT MIRVs. The SS-18 was described by the USA as ‘extremely accurate’ and ‘designed to attack hard targets, such as US ICBM silos’. Also, according to US sources, the SS-18 force was capable of destroying ‘65–80% of the US ICBM force, using two warheads against each. Even after such an attack, there would still be over 1,000 SS-18 warheads available for further strikes against US targets.’

SS-19

The SS-19, ‘Stiletto’, was developed in parallel to the SS-17 and entered service in 1971, with a peak deployment of 360; it was the most widely used Soviet ICBM of its generation. It was a hot-launch missile, although it was housed in a canister which reduced silo damage. Various versions of the missile were developed, but the service version was the Mod 3, with six 550 kT MIRVs, each with a CEP of 400 m, which, again according to US sources, meant that ‘while less accurate than the SS-18, [it had] significant capability against all but hardened silos. It could also be used against targets in Eurasia.’ It would therefore appear safe to assume that the SS-19 was targeted against counter-force targets, such as reasonably hardened military targets, but not against ICBM silos, which were the task of the SS-18.

SS-24

The SS-24, ‘Scalpel’, was fielded in two launch modes, the Mod 1 being rail-mobile and the Mod 2 silo-based. The missiles in the two variants were virtually identical, each carrying ten 500 kT MIRVs over a range of 10,000 km with a CEP of 200 m. The Mod 1 was deployed in trains with three launchers each, based at three rail garrisons, all in Russia: there were four trains each at Kostroma and Krasnoyarsk and three trains at Bershet. Fifty-six of the silo-launched version (Mod 2) were deployed, split between one site in Russia (ten silos) and one site in Ukraine (forty-six silos).

SS-25

The SS-25, ‘Sickle’, was the last Soviet ICBM to be fielded during the Cold War. It was a single-warhead missile, carrying one highly accurate 550 kT warhead, and entered service in 1985. At the end of the Cold War 288 missiles were split between nine sites, with further missiles being deployed up to 1994. The missile was road-mobile, but was normally housed in a garage with a sliding roof which could be opened for an emergency launch. Given the necessary warning, however, the fourteen-wheel TELs were deployed to pre-surveyed sites in forests, where they were raised on jacks for stability during launch.

The SS-25 missile was contained in a large cylindrical canister, and the system was reloadable, highly survivable and capable of rapid retargeting. This led US sources to speculate that it was designed for use in a protracted nuclear war as a reserve weapon, when it would ride out the first wave of US attacks on the Soviet nuclear arsenal and then retaliate against surviving targets, which could be selected and set into the warhead at the time. It was during the flight testing of the SS-25 that the Soviets first used encryption on their telemetry down-links, which caused the US to claim that they were acting in contravention of the SALT II agreement.

Cold War Intercontinental Ballistic Missiles II

BASING

The original German A-4 missile employed a brilliantly simple road-mobile system, in which the missile was carried on a four-wheeled trailer known as a Meillerwagen. When the missile was to be launched, the Meillerwagen raised it to the vertical and then lowered it on to a small launch platform. Each site had a crew of 136 men, with many more men and vehicles in the logistics chain.

The Germans also gave active consideration to launching the A-4 missile from a train. According to a 1944 plan, each train would carry six ready-to-use missiles, and include an erector–launcher car, seven fuel-tanker cars, a generator car, a workshop, a spares car and several cars for the crew. On top of this, however, the train would also carry all the vehicles normally associated with a missile battery, in order that the unit could dismount from the train and operate independently of it, which brought the whole battery up to the unwieldy total of seventy to eighty freight cars, probably requiring at least two separate trains. Separate logistic trains were planned to bring further supplies of fuel and missiles. Prototype trains were running before the end of the war, but the system was not a practicable proposition in view of the air supremacy of the Allies, for whom all trains were a high-priority target.

ICBM forces were originally built to threaten the opponent’s civil population, which in itself was not a difficult task: the warheads were relatively inaccurate, but the cities were large and the warheads powerful. It was obviously highly desirable, from both political and military viewpoints, to defend the population from this threat, in the same way that bombers had been opposed by a mixture of fighters and anti-aircraft guns during the recent war. It was not feasible at the time to intercept incoming ICBMs, so the only defence was to attack the ICBMs at their source, which could be done only by conducting a pre-emptive strike with other ICBMs. Thus the position was rapidly reached where the ICBMs’ principal target was the other side’s ICBMs, moving on to other missions only when that first battle had been decided. It was therefore necessary to optimize the attacking potential of one’s own missiles while ensuring their survivability in the face of an opponent’s first strike. There were four possibilities:

• superhardened silos, which would withstand even the most powerful incoming warhead;

• using a greater number of silos than missiles, so that the opponent would waste warheads on empty silos;

• making the missiles mobile, as the Germans did, so that the enemy could not locate them;

• using anti-ballistic-missile (ABM) defences.

The essence of the problem can be illustrated by a simplified example in which the aggressor (A) has 100 ICBMs, each with ten warheads, while the other side (B) has 500 ICBMs, each with three warheads. (For the purpose of this example, all missiles and warheads are perfectly available and reliable, and each warhead will kill one silo.) Thus A is capable of destroying 1,000 silos, and if he carries out a pre-emptive strike he needs to use only fifty missiles, leaving B with no missiles. A still has fifty missiles and is clearly the winner. If, however, B builds another 500 silos, but no more missiles, and spreads his 500 ICBMs randomly among the 1,000 silos, A, not knowing which silos are occupied, must attack all 1,000. Both sides then end up with zero ICBMs, which is a better outcome for B than the first scenario, but is still unsatisfactory from a military point of view. But if B now builds a total of 2,000 silos, on average half his missiles (i.e. 250) will survive the attack.
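
The arithmetic generalizes neatly: if the defender scatters m missiles at random among s silos and the attacker can cover w of them, the expected number of survivors is m(1 − w/s). The short sketch below (an illustration of the example above, not anything from the source) reproduces all three cases:

```python
import random

def expected_survivors(missiles, silos, warheads):
    """Expected surviving missiles for B, assuming perfectly reliable
    warheads, one kill per warhead, and random, uninformed targeting."""
    return missiles * (1 - min(warheads, silos) / silos)

def simulated_survivors(missiles, silos, warheads, trials=200):
    """Monte Carlo check of the same quantity."""
    survivors = 0
    for _ in range(trials):
        occupied = set(random.sample(range(silos), missiles))
        struck = set(random.sample(range(silos), min(warheads, silos)))
        survivors += len(occupied - struck)
    return survivors / trials

# B's 500 missiles versus A's 100 ICBMs x 10 warheads = 1,000 warheads:
for n_silos in (500, 1000, 2000):
    print(n_silos,
          expected_survivors(500, n_silos, 1000),
          simulated_survivors(500, n_silos, 1000))
# -> 0, 0 and 250 expected survivors: the three outcomes described above.
```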

Silos

The first missiles, such as the early Atlas and Thor, were located in a shed, primarily for protection from the weather, and were taken out to enable them to be raised to the vertical for fuelling and launch. The missiles were also located close to each other. Both factors together made the missiles extremely vulnerable to incoming missiles, which did not need to be too accurate to achieve a kill.

The next step was to place the missiles in semi-hardened shelters and to separate these shelters so that one incoming warhead could not destroy more than one missile. In addition, the shelters had split roofs, so that the missile could be raised, fuelled and launched without wasting time moving it out on to a launch pad. As the perception of the threat increased, the spacing between individual missiles increased yet further and the shelters became bunkers, recessed into the ground.

The next step was to mount the missile vertically rather than horizontally, and to put it in a hole in the ground. The USAF, however, adopted a ‘halfway’ system with the Atlas and Titan I missiles, in which the missile stood upright in a silo which, in the case of Atlas, was some 53 m deep and 16 m in diameter, resting on the launch platform, which was counterbalanced by a 150 tonne weight. The launch procedure involved fuelling the missile in the silo and then using hydraulic rams to raise the entire launch platform and missile to the surface, where the missile was then launched. Titan I had a super-fast fuelling system and a high-speed elevator which reduced reaction time to approximately twenty minutes, while the silo and all associated facilities were hardened to withstand an overpressure of 20 kgf/cm2.

A completely new launch system was introduced with Titan II, in which the missile was launched direct from the silo. There was, however, considerable concern about the effects of the rocket efflux on the missile during the few seconds that the missile was still inside the silo, so the missile rested on a large flame deflector, which directed the efflux into two large ducts exhausting to the atmosphere a short distance from the silo. Each missile complex was 45 m deep and 17 m wide and occupied nine levels, which housed electrical power, air conditioning, ventilation, and environmental protection, as well as hazard sensors and the associated corrective devices. At the centre was the launch duct, in which the missile was suspended in an environmentally controlled atmosphere. A walkway extended from the missile silo to a blast lock which provided controlled access between the silo and the tunnels leading upward to the above-ground access and laterally to the launch-control centre (LCC). The LCC was a three-level, shock-isolated cage suspended from a reinforced-concrete dome and housed two officers and two enlisted men. As with the Titan I silo, the Titan II silo was hardened to 20 kgf/cm2.

When it learned that the Soviets were launching direct from the silo, the USAF followed suit and the Minuteman I missile became the first US missile to use the ‘hot launch’, in which the missile rose from the silo surrounded by the flames and smoke from the rocket motor. The next Soviet innovation was the ‘cold launch’, in which a gas generator within the silo produced a pressure sufficient to propel the missile some 20–30 m clear of the silo before its first-stage motor fired. This protected the silo from damage, enabling it to be reused within a fairly short space of time. It was used by the Soviets from the SS-17 onwards, and by the USAF in Peacekeeper (MX).

Following their introduction in the mid-1960s, underground silos became increasingly complicated and expensive structures. Ideally they were located at a relatively high altitude, to improve the missiles’ range, and in springy ground, to absorb as much as possible of the shock waves from incoming warheads. The silo was a vertical, steel/reinforced-concrete tube, housing an elaborate suspension and shock-isolation system which supported the missile as well as providing further insulation to minimize the transfer of shock motion from the walls and floor of the silo to the missile. The top third of the silo housed maintenance and launch facilities, which were known as the ‘head works’ in USAF parlance. Finally, the missile tube was capped by a massive sliding door, which provided protection against overpressure by transmitting the shock caused by the explosion of an incoming warhead to the cover supports rather than to the vertical tube containing the missile; it also provided protection against radiation and EMP effects. The door was designed to sweep the area as it opened, to prevent debris falling into the silo tube and possibly interfering with the launch process.

Individual silos were grouped together for control purposes, but were sited sufficiently far apart to ensure that one incoming warhead could not destroy more than one missile. Control was exercised by an underground command centre, manned by a small crew of watchkeepers, whose functions included operating the dual-key safety system in which launch could be authorized only by two officers acting independently. This command centre was linked to its superior headquarters and to the individual silos under its control by telecommunications and by systems-monitoring links. This introduced a further problem: the vulnerability of these links to blast and, in particular, to electromagnetic pulses (EMP). Making these links survivable against the perceived threats (known as ‘nuclear hardening’) became an increasingly complex and expensive undertaking as the Cold War progressed.

The protection factor (‘hardness’) of a silo was measured by its ability to withstand the overpressure resulting from the blast effects of a nuclear explosion, and was expressed in kilograms-force per square centimetre (kgf/cm2) or pounds per square inch (psi) (1 kgf/cm2 ≈ 14.2 psi). In the USA, the Atlas, Titan I and Titan II silos were constructed with a hardness of 20 kgf/cm2 (300 psi), while the Minuteman I silos (mid-1960s) were built with a hardness of some 85 kgf/cm2 (1,200 psi). Finally, in the 1970s, Minuteman III/Peacekeeper silos were built with a hardness of 140 kgf/cm2 (2,000 psi). By this time, however, the silos were so expensive that, despite reports that the Soviets were ‘superhardening’ their silos to resist overpressures of 425 kgf/cm2 (6,000 psi), Congress repeatedly refused to authorize any further hardening of US silos.

The Soviet programme of silo building, refurbishment and hardening was more successful. The earliest silos, built before 1969, were hardened to withstand an overpressure of some 7 kgf/cm2 (100 psi), with the next generation built to 20 kgf/cm2 (300 psi). Those built in the early 1970s for the SS-18 could withstand 425 kgf/cm2 (6,000 psi), which was achieved using concrete reinforced by concentric steel rings.
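
Since the two units are used interchangeably in the last two paragraphs, a trivial conversion check (illustrative only; the psi figures quoted in the text are rounded) may help:

```python
# Silo hardness figures quoted above, converted at 1 kgf/cm2 ~ 14.2 psi.
KGF_CM2_TO_PSI = 14.2

for kgf_cm2 in (7, 20, 85, 140, 425):
    print(f"{kgf_cm2:>4} kgf/cm2 ~ {kgf_cm2 * KGF_CM2_TO_PSI:>6,.0f} psi")
# -> 99, 284, 1,207, 1,988 and 6,035 psi, in line with the text's rounded
# 100/300/1,200/2,000/6,000 psi figures. (The 530 kgf/cm2 SICBM silo
# mentioned below works out at roughly 7,500 psi.)
```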

Alternative Basing Schemes

Although most of their ICBMs were always sited in silos, both the USA and the USSR repeatedly examined alternatives, both to increase survivability and, perhaps of greater importance in the USA than in the USSR, to reduce costs. In the USA, environmental factors also became an increasingly important consideration.

One of the US schemes was called Multiple Protective Structures (MPS) and consisted of a number of ‘racetracks’, each about 45 km in circumference and equipped with twenty-three hardened shelters. One mobile ICBM, mounted on a large wheeled TEL, would have moved around each racetrack at night in a random fashion, with decoy TELs and missiles adding to the adversary’s uncertainties. Basic MPS involved 200 missiles moving between 4,600 shelters covering an area of some 12,800 km2, but a more grandiose version envisaged 300 missiles moving around 8,500 shelters.

An enhanced version of MPS was proposed in the early 1980s, in which a new Small ICBM (SICBM) would have been deployed in fixed, hardened silos distributed randomly among the 200 racetracks of the MPS system, thus adding to the aiming points for the Soviet ICBM force. It was intended that the SICBM would be 11.6 m long and weigh 9,980 kg, have a range of 12,000 km, and carry a single 500 kT warhead; it would have been launched by an airborne launch-control centre. SICBM would have been housed in a tight-fitting container placed in a vertical silo hardened to approximately 530 kgf/cm2, and it would have required an exceptionally accurate incoming warhead to destroy such a target. Various other launch methods were also considered for SICBM, including a road vehicle, normal silos, airborne launch from a transport aircraft, and (possibly the only time this was ever considered for an ICBM) from a helicopter.

Another scheme was based on the racetrack principle of MPS, but this time with the TELs running inside shallow tunnels, 4 m in diameter. The TELs would simply have kept moving, thus avoiding the need for shelters, and would have had large plugs fore and aft to protect against nuclear blast within the tunnel. If required to launch, the TEL would have halted and used hydraulic jacks to drive the armoured roof upwards, breaking through the surface until the missile was raised to the vertical.

Deep Basing (DB) involved placing the ICBMs either singly or in groups deep underground, where they would ride out an attack and then emerge to carry out a retaliatory strike. One of the major DB schemes was the ‘mesa concept’, in which the missiles, crews and equipment were to be placed in interconnecting tunnels some 760–915 m deep under a mesa or similar geological formation. Following an enemy nuclear strike, the crews would have used special machines to dig a tunnel to the surface and then brought the launcher to the open to initiate a retaliatory strike. This scheme’s disadvantage lay in its poor reaction time and the difficulty it posed for arms-control verification. From the practical point of view it would have been necessary to find rock which was both fault-free and sufficiently strong to resist a Soviet nuclear attack, but which could nevertheless be drilled through in an acceptable time and without the machinery becoming jammed by debris. On top of all that, a second incoming nuclear strike when the drilling machine was near to the surface would have caused irreparable damage. A related project (Project Brimstone) examined existing deep mines, but also proved unworkable.

A totally different approach, known as Closely Based Spacing or ‘Dense Pack’, was also considered. This suggested that, instead of spacing missile silos sufficiently far apart to ensure that not more than one could be destroyed by one incoming warhead, 100 MX missiles should be sited in superhardened silos placed deliberately close together. The idea was that this would take advantage of the ‘fratricide’ effect in which incoming warheads would be deflected or destroyed by the nuclear explosions of the previous warheads. A spacing of the order of 550 m was suggested, and it was claimed that in such a scheme between 50 and 70 per cent of the ICBMs would have survived.

Mobile basing

All the basing methods discussed above were either static or involved limited movement in a closed circuit, but the question of mobile basing was often considered as well. As described earlier, the German A-4 was designed as a road-mobile system, but an alternative rail-based option was also considered, and a similar scheme was designed and tested during the development phase of the Minuteman I. The plan was to have fifty trains, each of some fourteen vehicles, which would have included up to five TEL cars, each carrying a single missile, together with command-and-control, living-accommodation, and power facilities. The scheme was examined in great detail, and a prototype ‘Mobile Minuteman’ train was tested on the public railway. Although the scheme proved feasible, it was dropped in favour of silo deployment.

A similar proposal was considered during the long development of the Peacekeeper (MX) system, and very nearly became operational. This version would have consisted of twenty-five missile trains, each carrying two missiles. Each train would have consisted of the locomotive and six cars: two missile-launch cars, a launch-control car, a maintenance car, and two security cars. In peacetime the trains would have been located in a ‘rail garrison’ sited on an existing Strategic Air Command base, which would have contained four or five shelters (known as ‘igloos’), each housing one train. These garrisons would each have covered an area of some 18–20 hectares, with tracks leading to the USA’s 240,000 km national rail network. On receipt of strategic warning the trains would have deployed on to this national network, where they would have rapidly attained a high degree of survivability. This scheme was under active development from 1989 until its cancellation in 1991.

As we have seen, the Soviet SS-24 Mod 1 was actually fielded in the rail-mobile mode. There were three rail garrisons, all in Russia, with four trains at each of two sites and three trains at the third. Each train carried three launchers, together with further cars for launch control, maintenance and power supply.

The Soviets also fielded a road-mobile ICBM, the SS-25, which was also the last Soviet ICBM to enter service during the Cold War. This single-warhead missile was carried on a fourteen-wheeled TEL, which was raised on jacks for stability during the launch. The TEL and its missile were normally housed in a garage with a sliding roof which would be opened for an emergency launch. Given the necessary warning, however, the TELs deployed to pre-surveyed sites in forests.

One US proposal was the ‘continuous patrol aircraft’, in which a packaged missile was carried inside a large, fuel-efficient aircraft. On receipt of verified launch instructions, the missile would have been extracted by a drogue parachute, and once it was descending vertically its engine would have fired automatically, enabling the missile to climb away on a normal trajectory. Tests were carried out using a Minuteman I missile transported by a C-5 Galaxy and were completely successful. Large numbers of aircraft would have been needed to maintain the number required on simultaneous patrol. It would have been very difficult for a potential enemy to track them and even more difficult to guarantee the destruction of every airborne aircraft in a pre-emptive strike, but the main weaknesses of the scheme were the vulnerability of the airfields, the enormous operating costs, and, to a lesser degree, the decreased accuracy of the missile.

Electronic Warfare (Measure-Counter Measure)

This Me110G-4 mounts the huge ‘stag-antler’ antennae of the metre-wave Lichtenstein SN-2. However, the small aerial cluster in the centre of the nose gives away that this aircraft is equipped with the SN-2b radar. These early SN-2 sets had a very large minimum range, and so a second radar set from a Lichtenstein C-1 was installed to cover this dead zone. The resulting installation was draggy and caused a loss of aircraft performance. Worse still, the Funker (or radar operator) was forced to use two separate radar sets simultaneously. The later SN-2c had a short enough minimum range that it could drop the C-1 set.

In 1937 a British research scientist, R. V. Jones, first noticed that a strip of aluminium foil drifting through the air produced a blip on radar screens: 2,000 such strips, 1½ ft long and 1 inch wide, showed up on a radar screen looking much like a British heavy bomber. In the summer of 1941 an RAF Wellington bomber fitted with special radio aerials (antenna whips) was found to receive a lot of attention from German gunners even when it was flying among other aircraft. It was deduced that the aerials created a larger radar echo than the bomber would otherwise have produced, thereby drawing more enemy fire. During the next air raid this bomber participated in, against Benghazi in Libya, the crew dropped packets of aluminium strips 18 inches long and 1.5 inches wide (the size of the special aerials), but no change was observed. The strips were tried once more, and when this too produced no useful results the whole idea was abandoned.

A year later, after many experiments, it was found that a bundle of 240 aluminium strips produced a radar echo similar to that of an RAF Blenheim bomber, and that ten such bundles released over a mile made it nearly impossible for radar to pick out the real bomber’s echo. Window was first used operationally in a series of four major raids by Bomber Command against Hamburg between 24 July and 2 August 1943. On the first night alone 92 million strips of Window (about 40 tons) were released from 746 RAF bombers. The German radar operators were totally confused, because the Window cloud showed up on their screens as thousands of aircraft. Accurate pinpointing of the invading bombers was therefore impossible, and on that night the German night defences were rendered completely ineffective. The new weapon spelled the end of the Himmelbett system because, with both the flak and night-fighter arms denied radar back-up, the defence had to rely upon random pick-ups by the similarly handicapped searchlights. From this point forward Window was used on every air raid, and in many of the numerous spoofing raids mounted to deceive the German defences.
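
The Hamburg figures are self-consistent; a rough check (assuming long tons, an assumption not stated in the source) gives the per-aircraft load:

```python
# Back-of-envelope check of the first Hamburg raid figures quoted above.
strips = 92_000_000
total_mass_kg = 40 * 1016        # assuming 40 long tons (~1,016 kg each)
bombers = 746

print(f"mass per strip:   {total_mass_kg / strips * 1000:.2f} g")  # ~0.44 g
print(f"load per bomber:  {total_mass_kg / bombers:.0f} kg")       # ~54 kg
print(f"strips per plane: {strips / bombers:,.0f}")                # ~123,000
```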

Window was aluminium foil stiffened with a black paper backing and cut into strips 30 cm long and 1.5 cm wide. The silvery side was coated with lampblack, a fine soot collected from incompletely burned creosote or kerosene, so that clouds of Window would not show up in the glare of searchlights. At first Window was thrown out through any convenient opening, but soon each Lancaster was fitted with a louvred box structure on the lower starboard side of the nose, from which Window was released by the flight engineer or the bomb aimer.

The AAF called Window “chaff” because of the way it resembled wheat chaff in the wind. It was first used by the Eighth Air Force in December 1943 and by the Fifteenth Air Force in March 1944.

By 1944 every AAF bomber in the lead wing carried 144 packages of chaff. These were dropped at four-second intervals, so that each aircraft could lay a chaff lane or “corridor” 20 miles long.
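
The corridor length implies a particular formation speed; the sketch below (the speed is derived, not a source figure) makes the arithmetic explicit:

```python
# Chaff-corridor arithmetic from the figures quoted above.
packages, interval_s, corridor_miles = 144, 4, 20

dispensing_time_s = packages * interval_s                  # 576 s (~9.6 min)
implied_speed_mph = corridor_miles / (dispensing_time_s / 3600)
print(f"{dispensing_time_s} s of dropping -> ~{implied_speed_mph:.0f} mph")
# -> ~125 mph ground speed for a 20-mile corridor; a faster formation would
# simply lay a proportionally longer lane.
```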

Mandrel

The Germans’ Freya early warning radars were reinforced by the introduction of the Wassermann (Aquarius) and Mammut (Mammoth) long distance radars which could plot bombers above the radar ‘horizon’ as far north as Norfolk and Suffolk, but these could be jammed by the British airborne Mandrel device.

To counter Freya, the British used equipment called ‘Moonshine’. Carried by Boulton Paul Defiant aircraft of the Special Duties Flight (later No. 515 Squadron RAF), a single set retransmitted a portion of the Freya signal, amplifying the apparent return. Eight aircraft with ‘Moonshine’ could mimic a force of 100 bombers.

Bombers could also detect when they were being monitored by the German Würzburg gun-laying radar with a device called Boozer.

Measure Counter-Measure

June 1940 – Würzburg: German radar with a 25-mile range, mainly used for anti-aircraft gun aiming; could judge altitude
Sept 1940 – Freya: German early-warning radar with a 75-mile range for detecting approaching aircraft; could not judge altitude
Oct 1940 – Würzburg: used in pairs, one tracking the bomber and the other the interceptor, which was vectored onto the target
Sept 1941 – Würzburg-Riese: giant version of the Würzburg, range over 40 miles
Feb 1942 – Lichtenstein: German airborne radar, range 2 miles down to 200 yards
March 1942 – Mammut: improved Freya early-warning radar, 200-mile range; could not measure altitude
March 1942 – Wassermann: German early-warning radar, 150-mile range; could judge altitude
March 1942 – Gee: navigation aid for British bombers, accurate to within 6 miles at a 400-mile range
June 1942 – Shaker: lead aircraft blind-dropped marker bombs using Gee for the following bombers
Aug 1942 – Moonshine: device that greatly amplified Freya radar pulses, giving the impression of a larger force; carried in Defiants; used until October 1942
Aug 1942 – Heinrich: German transmitters that jammed the Gee ground stations; Gee became unusable by Nov 1942
Nov 1942 – Mandrel: noise jammer, mainly against Freya, carried in aircraft flying ahead of and with the bombers
Nov 1942 – Tinsel: device which transmitted amplified engine noise to disrupt German ground-to-air communications
Dec 1942 – Oboe: accurate blind-bombing system with a 270-mile range, used by the Pathfinders; first used by US bombers Oct 1943
Jan 1943 – H2S: powerful centimetric radar giving a rough representation of the terrain on a CRT screen in the bomber; water and land, and cities and open countryside, were easily distinguished; not fully effective until June 1943; first American use Nov 1943
March 1943 – Monica: radar transmitter in the tail of bombers, range 1,000 yards, to give warning of approaching aircraft; no IFF
March 1943 – Boozer: radar receiver which gave visual warning if the aircraft was detected by Würzburg or Lichtenstein radar
June 1943 – AI Mk X: centimetric airborne radar; British-service version of the US SCR-720
June 1943 – Serrate: radar receiver for night-fighters which picked up Lichtenstein emissions, giving visual cues of height and direction
July 1943 – Window: bundles of metal foil to confuse the Würzburg and Lichtenstein radars
Aug 1943 – Special Tinsel: update to Tinsel to jam the new high-frequency German transmitters
Sept 1943 – Naxburg: modified Würzburg which could pick up H2S signals, locating individual aircraft at up to 150 miles
Oct 1943 – ABC (‘Airborne Cigar’): airborne transmitter to jam German fighters’ VHF radios
Oct 1943 – Corona: Special Tinsel transmitters sending false instructions to German fighter pilots
Oct 1943 – SN-2: German airborne radar impervious to Window, range 4 miles down to 400 yards
Nov 1943 – Würzlaus: modification to the Würzburg which, under favourable conditions, could differentiate between moving bombers and relatively motionless Window
Nov 1943 – Nürnberg: modification to the Würzburg allowing a skilled operator to distinguish a pulse from a bomber from a pulse from Window
Nov 1943 – Flensburg: German airborne receiver which picked up Monica transmissions
Dec 1943 – Dartboard: jamming of the Stuttgart radio station, which was broadcasting musical coded instructions to night-fighters
Jan 1944 – Drumstick: ground transmitters sending meaningless Morse signals to disrupt German ground-to-air Morse traffic
Jan 1944 – Oboe: converted to a centimetric wavelength
Jan 1944 – Naxos: German airborne receiver which picked up H2S signals
April 1944 – Jagdschloss: German ground radar, 90-mile range, working on four separate frequencies
April 1944 – Egon: German ground-to-air fighter-guidance system, range 125 miles
Aug 1944 – Jostle IV: improved ABC which blotted out a whole range of frequencies at a time instead of one at a time
Sept 1944 – Window: Window cut to lengths effective against the SN-2 radar
Oct 1944 – Serrate IV: modified Serrate to home in on SN-2 signals
Oct 1944 – Perfectos: airborne transmitter-receiver which triggered German IFF sets, allowing the direction and distance of the fighter to be determined from its transmissions
Oct 1944 – Piperack: airborne transmitter to jam SN-2
Dec 1944 – Micro-H: American alternative to Gee, should the latter ever be jammed