The Mongol Wars and the Evolution of the Gun, 1211–1279

The rise of the Mongols was a key event in the evolution of gunpowder technology. Their wars drove military developments in East Asia and spread gunpowder technology westward. You’d think this statement would be uncontroversial. After all, the Mongols created the world’s largest empire, connecting East Asia to South Asia, Western Asia, the Middle East, and Eastern Europe. Mongol commanders excelled at incorporating foreign experts into their forces, and Chinese artisans of all kinds followed Mongol armies far from home. Yet oddly, experts disagree about the extent to which—or even whether—Mongols used gunpowder weapons in their warfare, and some deny them a role in the dissemination of the technology.

How can there be disagreement about such a fundamental question? One reason is that most historians have a poor understanding of what early gunpowder weapons were like and what they were used for—they expect to find gunpowder weapons blasting down stone walls, as cannons would eventually come to do in the West. As we’ve seen, that’s not how gunpowder weapons worked in this period. Even the iron bombs of the Jin—the most powerful gunpowder weapons yet invented—were used not to batter walls but to kill people or, at most, to help destroy wooden structures. Moreover, at the time of the Mongol Wars, the most common gunpowder weapon was still the gunpowder arrow, used primarily as an incendiary. There’s no shortage of accounts referring to blazing arrows and fiery orbs hurled by Mongol catapults, but historians have argued that these were not gunpowder weapons on the grounds that gunpowder weapons would have attracted much more attention.

Another problem is that the Mongols left few historical documents to posterity. Even the records left by the Mongols’ regime in China—the Yuan dynasty—are fragmentary, and China is a place that takes its history seriously. The official History of the Yuan Dynasty, compiled by scholars in China after the fall of the Yuan in 1368, is sloppy and patchy compared to other official histories in the Chinese canon, and Sinologists have noted that Yuan documents are particularly reticent about military details. Scholars must piece together the Mongols’ history from the sources of their beleaguered enemies, whose records tended not to survive burning cities. So although we can paint a fairly clear picture of the development of firearms technology during the Song-Jin Wars, our understanding of the more intense and catalytic Mongol Wars is less complete.

Even so, there seems to be little doubt that the Mongols were proficient in gunpowder weapons. No one fighting in the Chinese context—and the Mongols met their most determined resistance in the Chinese realm—could remain unconvinced about the power of gunpowder, which by the early 1200s had come to play an essential role in warfare.

Indeed, the Mongols had a chance to learn about gunpowder weapons from the masters of their use, the Jin dynasty.

The Mongol-Jin Wars

Genghis Khan launched his first concerted invasion of the Jin in 1211, and it wasn’t long before the Mongols were deploying gunpowder weapons themselves, for example in 1232, when they besieged the Jin capital of Kaifeng. By this point they understood that sieges required careful preparation, and they built a hundred kilometers of stockades around the city, stout and elaborate ones, equipped with watchtowers, trenches, and guardhouses, forcing Chinese captives—men, women, and children—to haul supplies and fill in moats. Then they began launching gunpowder bombs. Jin scholar Liu Qi recalled in a mournful memoir how “the attack against the city walls grew increasingly intense, and bombs rained down as [the enemy] advanced.”

The Jin responded in kind. “From within the walls,” Liu Qi writes, “the defenders responded with a gunpowder bomb called the heaven-shaking-thunder bomb. Whenever the [Mongol] troops encountered one, several men at a time would be turned into ashes.” The official Jin History contains a clear description of the weapon: “The heaven-shaking-thunder bomb is an iron vessel filled with gunpowder. When lighted with fire and shot off, it goes off like a crash of thunder that can be heard for a hundred li [thirty miles], burning an expanse of land more than half a mu [a mu is a sixth of an acre], and the fire can even penetrate iron armor.” Three centuries later, a Ming official named He Mengchun (1474–1536) found an old cache of them in the Xi’an area: “When I went on official business to Shaanxi Province, I saw on top of Xi’an’s city walls an old stockpile of iron bombs. They were called ‘heaven-shaking-thunder’ bombs, and they were like an enclosed rice bowl with a hole at the top, just big enough to put your finger in. The troops said they hadn’t been used for a very long time.” Possibly he saw the bombs in action, because he wrote, “When the powder goes off, the bomb rips open, and the iron pieces fly in all directions. That is how it is able to kill people and horses from far away.”

Heaven-shaking-thunder bombs seem to have first appeared in 1231 (the year before the Mongol Siege of Kaifeng), when a Jin general used them to destroy a Mongol warship. But it was during the Siege of Kaifeng of 1232 that they saw their most intense use. The Mongols tried to protect themselves by constructing elaborate screens of thick leather, which they used to cover workers who were undermining the city walls. In this way the workers managed to get right up to the walls, where they began excavating protective niches. Jin defenders found this exceedingly worrisome, so according to the official Jin History they “took iron cords and attached them to heaven-shaking-thunder bombs. The bombs were lowered down the walls, and when they reached the place where the miners were working, the [bombs were set off] and the excavators and their leather screens were together blown up, obliterated without a trace.”

The Jin defenders also deployed other gunpowder weapons, including a new and improved version of the fire lance, called the flying fire lance. This version seems to have been more effective than the one used by Chen Gui a century before. The official Jin History contains an unusually detailed description:

To make the lance, use chi-huang paper, sixteen layers of it for the tube, and make it a bit longer than two feet. Stuff it with willow charcoal, iron fragments, magnet ends, sulfur, white arsenic [probably an error that should mean saltpeter], and other ingredients, and put a fuse to the end. Each soldier has hanging on him a little iron pot to keep fire [probably hot coals], and when it’s time to do battle, the flames shoot out the front of the lance more than ten feet, and when the gunpowder is depleted, the tube isn’t destroyed.

When wielded and set alight, it was a fearsome weapon: “no one dared go near.” Apparently Mongol soldiers, although disdainful of most Jin weapons, greatly feared the flying fire lance and the heaven-shaking-thunder bomb.

Kaifeng held out for a year, during which hundreds of thousands died of starvation, but ultimately it capitulated. The Jin emperor fled. Many hoped the Jin might reconstitute the dynasty elsewhere, and here and there Jin troops still scored successes, as when a Jin commander led four hundred fifty fire lance troops against a Mongol encampment: “They couldn’t stand up against this and were completely routed, and three thousand five hundred were drowned.” But these isolated victories couldn’t break Mongol momentum, especially after the Jin emperor committed suicide in 1234. Although some Jin troops—many of them Chinese—continued to resist (one loyalist gathered all the metal that could be found in the city he was defending, even gold and silver, and made explosive shells to lob against the Mongols), the Jin were finished. The Mongols had conquered two of the three great states of the Song Warring States Period, the Xi Xia and the Jin. Now they turned to the Song.

The Song-Mongol Wars

It’s striking that the Song, this supposedly weak dynasty, held the Mongols off for forty-five years. As an eminent Sinologist wrote more than sixty years ago, “unquestionably in the Chinese the Mongols encountered more stubborn opposition and better defense than any of their other opponents in Europe and Asia.”

Gunpowder weapons were central to the fighting. In 1237, for example, a Mongol army attacked the Song city of Anfeng, “using gunpowder bombs [huo pao] to burn the [defensive] towers.” (Anfeng is modern-day Shouxian, in Anhui Province.) “Several hundred men hurled one bomb, and if it hit the tower it would immediately smash it to pieces.” The Song defending commander, Du Gao, fought back resourcefully, rebuilding towers, equipping his archers with special small arrows to shoot through the eye slits of the Mongols’ thick armor (normal arrows were too thick), and, most important, deploying powerful gunpowder weapons, such as a bomb called the “Elipao,” named after a famous local pear. He prevailed. The Mongols withdrew, suffering heavy casualties.

Gunpowder technology evolved quickly, and although sources are sketchy, scattered references to arsenals show that gunpowder weapons were considered central to the war effort. For example, in 1257, a Song official named Li Zengbo was ordered to inspect border cities’ arsenals. He believed that a city should have several hundred thousand iron bombshells, and a good production facility should produce at least a couple thousand a month. But his tour was disheartening. He wrote that in one arsenal he found “no more than 85 iron bomb-shells, large and small, 95 fire-arrows, and 105 fire-lances. This is not sufficient for a mere hundred men, let alone a thousand, to use against an attack by the … barbarians. The government supposedly wants to make preparations for the defense of its fortified cities, and to furnish them with military supplies against the enemy (yet this is all they give us). What chilling indifference!”

Fortunately, the Mongol advance paused after the great khan died in 1259. When it resumed in 1268, fighting was extremely intense, and gunpowder weapons played significant roles. Blocking the Mongols’ advance were the twin fortress cities of Xiangyang and Fancheng, which guarded the passage southward to the Yangtze River. The Mongol investment of these cities was one of the longest sieges in world history, lasting from 1268 to 1273. The details are too numerous to examine here, but two episodes are salient, each of which involved a pair of heroes.

The first was a bold relief mission carried out by the so-called Two Zhangs. For the first three years of the siege, the Song had been able to receive food, clothing, and reinforcements by water, but in late 1271 the Mongols had tightened their blockade, and the inhabitants had become desperate. Two men surnamed Zhang determined to run the blockade and take supplies to the cities. With a hundred paddle wheel boats they traveled toward the twin cities, moving by night when possible, red lanterns helping them recognize each other in the darkness. But a commander on the Mongol side learned about their plans and prepared a trap. As they approached the cities they found his “vessels spread out, filling the entire surface of the river, and there was no gap for them to enter.” Thick iron chains stretched across the water.

According to the official Song History, the two Zhangs had armed their boats with “fire-lances, fire-bombs, glowing charcoal, huge axes, and powerful crossbows.” Their flotilla opened fire, and, according to a source recorded from the Mongol side, “bomb-shells were hurled with great noise and loud reports.” Wang Zhaochun suggests that the fire bombs used on the two Zhangs’ boats were not hurled by catapults but were shot off like rockets, using the fiery coals the vessels carried. This would be exciting, but unfortunately the evidence is inconclusive. Historian Stephen Haw suggests that the vessels carried guns, which is also possible, but again the evidence is inconclusive.

In any case, the fight was brutal and long. The Zhangs’ soldiers had been told that “this voyage promises only death,” and many indeed died as they tried to cut through chains, pull up stakes, and hurl bombs. A source from the Mongol side notes that “on their ships they were up to the ankles in blood.” But around dawn, the Zhangs’ vessels made it to the city walls. The citizens “leapt up a hundred times in joy.” When the men from the boats were mustered on shore, one Zhang was missing. His fate remains a mystery. The official Yuan History says one Zhang was captured alive. The official Song History has a more interesting story. A few days after the battle, it says, “a corpse came floating upstream, covered in armor and gripping a bow-and-arrow.… It was Zhang Shun, his body pierced by four lances and six arrows. The expression of anger [on his face] was so vigorous it was as though he were still alive. The troops were surprised and thought it miraculous, and they made a grave and prepared the body for burial, erected a temple, and made sacrifices.” Other sources suggest that Zhang Shun was indeed killed in battle. He was later immortalized in the famous novel The Water Margin.

Alas, the supplies didn’t save Xiangyang, because the Mongols had a pair of heroes of their own. Two Muslim artillery specialists—one from Persia and one from Syria—helped construct counterweight trebuchets whose advanced design allowed larger missiles to be hurled farther. They came to be known in China as “Muslim catapults” or “Xiangyang catapults,” and they were devastating. As one account notes, “when the machinery went off the noise shook heaven and earth; everything that [the missile] hit was broken and destroyed.” Xiangyang’s tall drum tower, for example, was destroyed in one thundering crash. Did these trebuchets hurl explosive shells? There’s no conclusive evidence, but it would be surprising if they didn’t, since, as we’ve seen, bombs hurled by catapults had been a core component of siege warfare for a century or more. In any case, Xiangyang surrendered in 1273.

The Mongols moved south. A famous Mongol general named Bayan led the campaign, commanding an army of two hundred thousand, most of whom were Chinese. It was probably the largest army the Mongols had commanded, and gunpowder weapons were key arms. In the 1274 Siege of Shayang, for example, Bayan, having failed to storm the walls, waited for the wind to blow from the north and then ordered his artillerists to attack with molten metal bombs. With each strike, “the buildings were burned up and the smoke and flames rose up to heaven.” What kind of bomb was this? The sources on the Battle of Shayang don’t provide details, but earlier references suggest that it was a type of gunpowder bomb. A reference to it appears in an account of a battle of 1129, when Song general Li Yanxian was defending a strategic pass against Jin troops. At one point, the Jin attacked the walls day and night with all manner of siege carts, fire carts, sky bridges, and so on, and General Li “resisted at each occasion, and also used molten metal bombs. Wherever the gunpowder touched, everything would disintegrate without a trace.” The molten metal bomb was probably a catapult projectile that contained gunpowder and molten metal, a frightening combination. It didn’t work for General Li in 1129: he lost the battle and either committed suicide or was killed, depending on which account you believe, but it did work for Bayan in 1274. He captured Shayang and massacred the inhabitants.

Gunpowder bombs were also present at a more famous Mongol massacre, the Siege of Changzhou of 1275, the last major battle of the Mongol-Song Wars. Bayan arrived there with his army and informed the inhabitants that “if you … resist us … we shall drain your carcasses of blood and use them for pillows.” His warnings were ignored. His troops bombarded the town day and night with fire bombs and then stormed the walls and began slaughtering people. Perhaps a quarter million were killed. Did his troops get new pillows? Sources don’t say, but it seems that a huge earthen mound filled with dead bodies lasted for centuries. Bones from the massacre were still being discovered into the twentieth century.

The Song held out for another four years, often with mortal bravery, sometimes even blowing themselves up to avoid capture, as when, in 1276, a Song garrison managed to hold the city of Jingjiang in Guangxi Province against a much larger Mongol force for three months before the enemy stormed the walls. Two hundred fifty defenders held a redoubt until it was hopeless and then, instead of surrendering, set off a huge iron bomb. According to the official Song History, “the noise was like a tremendous thunderclap, shaking the walls and ground, and the smoke filled up the heavens outside. Many of the troops [outside] were startled to death. When the fire was extinguished they went in to see. There were just ashes, not a trace left.”

Bombs like this one were the most significant gunpowder weapons in the Song-Mongol Wars, but in retrospect the most important development was the birth of the gun.

The Gun

What is a gun? The efficiency of a projectile-propelling firearm depends on how little of the expanding gas from the gunpowder reaction escapes past the projectile. The technical term for this gas leakage is “windage,” and less windage means more energy imparted to the projectile. A true gun therefore has a bullet that fits the barrel. During the Jin-Song Wars, fire lances were loaded with bits of shrapnel, such as ceramics and iron. Since these didn’t occlude the barrel, Joseph Needham calls them “coviatives”: they were simply swept along in the discharge. Although they could do damage, their accuracy, range, and power were relatively low.

In the late 1100s and the 1200s, the fire lance proliferated into a baffling array of weapons that spewed sparks and flames and ceramics and anything else people thought to put in them. This Cambrian Explosion of forms is similar to that found in the early gunpowder period itself—the fire birds, rolling rocket logs, and so on—and a famous military manual known as the Book of the Fire Dragon, compiled in the Ming period but partially written in the late 1200s, describes and illustrates many of these weapons, which historians have called, as a general category, “eruptors.”

These eruptors had fantastic names. The “filling-the-sky erupting tube” spewed out poisonous gas and fragments of porcelain. The “orifice-penetrating flying sand magic mist tube” spewed forth sand and poisonous chemicals, apparently into orifices. The “phalanx-charging fire gourd” shot out lead pellets and laid waste to enemy battle formations. We find these and other weapons jumbled together in the Book of the Fire Dragon, which makes it difficult to determine when they emerged and how they were used. But unfortunately, we must use whatever sources we can find, because starting in the Song-Mongol Wars, our documentary record becomes sparse, and it remains so through the Mongol period that followed, whose leaders, as I’ve noted, left unusually poor documentation relative to other Chinese dynasties.

It is clear that fire lances became common during the Mongol-Song Wars. In 1257, a production report for an arsenal in Jiankang Prefecture refers to the manufacture of 333 “fire-emitting tubes”, and two years later the Song History refers to the production of something quite similar, a “fire-emitting lance”, which emitted more than just fire: “It is made from a large bamboo tube, and inside is stuffed a pellet wad. Once the fire goes off it completely spews the rear pellet wad forth, and the sound is like a bomb that can be heard for five hundred or more paces.” Some consider this “pellet wad” to be the first true bullet in recorded history, because although the pellets themselves probably did not occlude the barrel, the wad did.

Yet a truly effective gun must be made of something stronger than bamboo. Traditionally, historians have argued that metal guns emerged after the Mongols defeated the Song and founded the Yuan dynasty in 1279. Researcher Liu Xu, for instance, writes, “It was the Yuan who completed the transition from the bamboo- (or wood- or paper-) barreled firearm to the metal-barreled firearm, and the first firearms in history appeared in China in the very earliest part of the Yuan.” Similarly, other scholars, including Joseph Needham, have suggested a date of around 1280.

Archaeological evidence tends to corroborate this view. Take, for instance, the Xanadu gun, so named because it was found in the ruins of Xanadu, the Mongol summer palace in Inner Mongolia. It is at present the oldest extant gun whose dating is unequivocal, corresponding to 1298. Like all early guns, it is small: just over six kilograms, thirty-five centimeters long. Archaeological context and the straightforward inscription leave little room for controversy about the dating, but it was certainly not the first of its kind. The inscription includes a serial number and other manufacturing information that together indicate that gun manufacture had already been codified and systematized by the time of its fabrication. Moreover, the gun has axial holes at the back that scholars have suggested served to affix it to a mount, allowing it to be elevated or lowered easily for aiming purposes. This, too, suggests that this gun was the product of considerable prior experimentation.

The Xanadu gun is the earliest dated gun, but undated finds may predate it. One famous candidate is a piece discovered in 1970 in the province of Heilongjiang, in northeastern China. Historians believe, based on contextual evidence, that it is from around 1288. One careful analysis argues persuasively that it was likely used by Yuan forces to quash a rebellion by a Mongol prince named Nayan. Like the Xanadu gun, it is small and light, three and a half kilograms, thirty-four centimeters, a bore of approximately two and a half centimeters.

Yet archaeologists in China have found evidence that may force us to move back the date of the first metal firearms. In 1980, a 108-kilogram bronze gun was discovered in a cellar in Gansu Province. There is no inscription, but contextual evidence suggests that it may be from the late Xi Xia period, from after 1214 but before the end of the Xi Xia in 1227 (Gansu was part of Xi Xia territory). What’s intriguing is that it was discovered with an iron ball and a tenth of a kilogram of gunpowder in it. The ball, about nine centimeters in diameter, is a bit smaller than the muzzle diameter of the gun (twelve centimeters), which indicates that it may have been a coviative rather than a true bullet-type projectile. In 1997, a bronze firearm of similar structure but much smaller size (just a kilogram and a half) was unearthed not far away, and the context of its discovery seems to suggest a similar date of origin. Both weapons seem more primitive than the Xanadu gun and other early Yuan guns, rougher in appearance, with uneven casting. Future archaeological discoveries will sharpen our understanding, but for now, it does seem possible that the earliest metal proto-guns were created in the late Xi Xia state, in the early 1200s.
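The significance of that ball-to-muzzle gap can be made concrete. A minimal sketch in Python, using the dimensions stated above for the Gansu find; the "snug" comparison figure is hypothetical, added only for contrast with a true bullet-type fit:

```python
import math

def windage_fraction(bore_diameter_cm: float, projectile_diameter_cm: float) -> float:
    """Fraction of the bore's cross-sectional area left open around the
    projectile -- a crude proxy for how much propellant gas can blow past."""
    bore_area = math.pi * (bore_diameter_cm / 2) ** 2
    proj_area = math.pi * (projectile_diameter_cm / 2) ** 2
    return (bore_area - proj_area) / bore_area

# The Gansu gun: a ~9 cm iron ball in a 12 cm muzzle.
gansu = windage_fraction(12.0, 9.0)

# A hypothetical snug-fitting bullet in the same bore, for comparison.
snug = windage_fraction(12.0, 11.8)

print(f"Gansu ball: {gansu:.0%} of the bore open")  # 44%
print(f"snug fit:   {snug:.0%} of the bore open")   # 3%
```

Nearly half the bore's cross-section open around the ball would have let most of the gas rush past, which is why scholars read the Gansu piece as firing a coviative rather than a true bullet.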

Although historians debate the precise date of the gun’s origin, at present the disputes are in terms of decades. It seems likely that the gun was born during the 1200s and that the Mongols and their enemies aimed guns at each other. After defeating the Song dynasty in 1279 and founding the Yuan dynasty, the Mongols and their Chinese troops invaded Japan, Vietnam, Burma, and Java, wars that stimulated further innovation, although, alas, records are few and say little about gunpowder weapons.

Equally important, although the Yuan brought relative peace within the borders of the Middle Kingdom itself, it was not a lasting peace. As the Yuan dynasty dissolved during the early 1350s, guns played a central role in the bloody wars that followed. The most successful gunpowder lord was a poor monk named Zhu Yuanzhang, whose gunmen succeeded in establishing one of the most impressive dynasties in China’s history, the great Ming, which scholars now call the world’s first gunpowder empire.

Torpedo Development Before WWII

U.S. Navy submarine and aircraft torpedoes developed by the Newport Torpedo Station, Rhode Island, before and during World War II. The upper photograph shows the Mark XIV Mod. 3 type submarine steam torpedo, and the externally identical Mark XXIII type. These torpedoes were 20′ 6″ long with a diameter of 21″. The lower photograph shows the Mark XIII type aircraft torpedo, which was also used on Motor Torpedo (PT) Boats. This type of torpedo was 13′ 5″ long with a diameter of 22.4″. A shroud ring was later added around the tail fins of the Mark XIII torpedo. See Photo # NH 82842 for a photograph of a torpedo with this modification. U.S. Naval History and Heritage Command Photograph.


The torpedo now entered a stage of maturity, and concomitantly left behind the great named individuals who had imagined it, conceived it, and brought it to this maturity. From here on the developments would be driven by conflict, and the engineers were simply cogs in the machine. The notable names in this new stage would be those of the users, not the makers. There is the name of just one last designer to retain: on the eve of the Great War Lieutenant F H Sandford invented the pattern-runner. Its introduction, however, would have to wait until the following world war. Also, just before the outbreak of war, the production of British torpedoes was again moved, this time to the Royal Naval Torpedo Factory (RNTF) at Greenock, Scotland.

During the war great use was made of the popular 18in and the newer 21in torpedoes – the latter introduced in 1910 – the smaller sizes being mostly obsolete, and used only for ships’ boats in cutting-out expeditions, such as the attempts to destroy the stranded submarine E 15 in the Dardanelles. However, early RN and German submarines did retain the latest 14in models and, of course, they were the first torpedoes used successfully in drops from aircraft. The German navy introduced an interim calibre, the 50cm, which was 19.7in diameter.

The Allied blockade of Germany resulted in major non-ferrous metal shortages, leading to the German occupying troops scavenging lead, brass and copper from houses in Belgium and northern France. This led them to introduce short-term expedients such as the use of cast iron instead of copper for the piping runs of their U-boat diesel engines: confiscated submarines and diesel engines in Allied hands after the Great War would cause their new owners significant problems. Despite this, the Germans concentrated on producing high-quality torpedoes, probably by taking shortcuts elsewhere, as they believed that the submarine torpedo was the decisive weapon that would help them win the war. One side effect, of course, was that they would not be producing Schwartzkopff torpedoes with phosphor-bronze bodies. Bronze would, however, continue to be used to produce U-boat torpedo tubes.

For a similar reason, having designed a stable warhead explosive in hexanite, a mixture of TNT and hexanitrodiphenylamine, the Germans continued with its production to the end of the war. This contrasted with the attitude of the British, who in 1917 were forced to dilute TNT with ammonium nitrate to produce amatol, a slightly inferior quality explosive, due to TNT being required elsewhere.

The fear of the surface torpedo was probably greater than the actual physical damage inflicted on the battle fleets. A far more dangerous development was the sinking of thousands of merchant ships by German U-boats, the majority by the torpedo.

On the Royal Navy side there were many submarine torpedo successes, but doubly galling in view of the relative scarcity of German ship targets, far too many torpedo failures. These were tracked down to inefficient exploders. Fisher became enraged, and declared he would have Assistant Director of Torpedoes Charlton ‘blown from a gun’. The reasons were the same as would reappear in the US Navy nearly thirty years later: a failure to expend expensive torpedoes in live-firing exercises involving the use of warheads against hard targets, as opposed to the standard practice of substituting a practice head. The exploders sometimes failed to go off, and it took a considerable time to find and eradicate the problem.

Thus it was infuriating for the crew of the only ‘K’-class steam submarine which ever drew a bead on a U-boat to actually hit it with an 18in torpedo, only to have it fail to explode. This class thereby failed to kill a single enemy, although accounting for a good number of RN deaths through accidents.

Again, the small, fast and highly manoeuvrable ‘R’ class, the first true hunter-killer submarines, might have made more of an impact, and promoted the future use of this new breed of submarine, had one of them not suffered exactly the same failure: a torpedo that hit a U-boat without exploding.

The Germans, for their part, experimented with a very large torpedo of 600mm diameter (23.6in) which they intended as the future armament of their last super-dreadnoughts and cruisers, plus the very large prototype destroyers. Very few were produced, and there is no record of their combat use. There was a proposal for an even larger torpedo, the 70cm (27.6in) J9. They did, however, make considerable advances, introducing remote control of exploding motorboats by radio, and of aerial torpedoes by wire, and they even introduced a magnetic influence exploder, which again would come to maturity late in the following world war.

In 1917 the Germans designed an electric torpedo, capable of a speed of 28 knots over 2000yds. Despite its slow speed, it had several advantages over the thermal-engined types. It was wakeless, giving escort vessels no indication of the location of the U-boat which had fired it. It did not change its mass, as did thermal torpedoes as their air and fuel were used up, so trim remained the same throughout its run. Finally, in an economy geared for war production, the electric torpedo did not require the same amount of highly-skilled man-hours in its construction as did the thermal type: it could be built by less specialised firms. Fortunately, the Armistice intervened before any electric torpedoes were fired in anger by the U-boat fleet.

The Americans produced a small experimental electric torpedo only 7¼in in diameter and some 6ft long, and followed it by a full-size 18in weapon in 1919. Then they lost interest in electric torpedoes for over twenty years. When the USA entered the war in 1917, large numbers of flush-decker destroyers were ordered, and to equip them the firm of Bliss-Leavitt produced over three thousand of their 21in Mark 8 ‘steam’ torpedoes, a production record at the time. These would remain in service with the flush-deckers through to 1945, and many would cross the Atlantic to join the Royal Navy when fifty of these veteran destroyers were delivered to Britain in 1940.


In the early 1920s the USN decided to withdraw the underwater tubes from its battleships, followed by the removal of their above-water tubes, deemed too dangerous in a big-gun battle. Above-water torpedo tubes remained standard fittings on all new cruiser designs, but except for the Omaha-class ships – which could be used in the role of destroyer flotilla leaders as in the IJN – the tubes were removed from all US cruisers in the mid 1930s. This move was not the result of a desire to reduce top-weight, as all the early Treaty cruisers came in under weight, but on the grounds that their tactical deployment did not require them, cruisers being reserved for gunfire support of destroyer flotillas.

Meanwhile, major moves were made in terms of the propulsion units. By the end of the Great War the standard British torpedo engine was a wet-heater four-cylinder radial made of bronze, with integral cylinder barrels and heads as in contemporary automobile practice. Because of the increasing weight of the air flask, required to withstand ever higher pressures, experiments were made with hydrogen peroxide, which produces oxygen via a catalyst and needed only a lighter containment vessel. These developments were shelved by the British, but taken up by the Germans, the Japanese and the Americans in the latter part of the Second World War.

To obtain more power from existing pressure vessels, thought was given to using air enriched with oxygen – up to 57 per cent by weight, or even pure oxygen. The British 21in Mark VII of 1928 was the Royal Navy’s first enriched air torpedo, carried by the London-class heavy cruisers, and this led to the 24.5in Mark 1 installed in the Nelson and Rodney. Before the outbreak of the Second World War, however, due to corrosion problems with the air vessels, both enriched air models were changed to run on normal compressed air.

At around the same time, the British were perfecting the burner-cycle reciprocating engine, which retained the classic four-cylinder radial layout. The bore/stroke ratio of these compact radial units was nothing like that of the contemporary long-stroke inline automobile engine, with its inherent disadvantages of piston friction and heavy, out-of-balance reciprocating weights. The radial engine could thus continue to rival the American preference for the turbine engine. The first British torpedo to use the burner-cycle engine was the long-lived Mark VIII for submarine use. It was a modernised model of the Mark VIII which would be fired against the General Belgrano fifty-five years later. The corresponding torpedo for surface ships was the Mark IX.

The Brotherhood burner-cycle engine was fed with compressed air at around 840psi. A small amount of paraffin was atomised in the air and burned. The resultant gas was fed into the cylinders at a temperature of 1000°C, and additional fuel was injected just before the piston reached top dead-centre. The heat of compression ignited the fuel mixture, driving the piston down as in a diesel engine. The exhaust gas was evacuated through ports in the cylinder, but there were also two auxiliary exhaust ports in the piston crown. By 1945 this impressive power unit would be tuned to produce up to 465bhp, sufficient to propel a 21in torpedo at 50 knots. Plans to run a version of this engine on nitric acid promised to produce 750bhp, but the outbreak of war meant that it was never built.
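As a rough plausibility check on those last figures, shaft power and speed together fix the thrust available. The sketch below does the arithmetic; the 0.5 propulsive efficiency is an assumed illustrative value, not a figure from the text:

```python
# Order-of-magnitude check: thrust available from a 465bhp torpedo
# engine at 50 knots. The propulsive efficiency of 0.5 is an assumed
# illustrative value, not a figure from the source.
BHP_TO_WATTS = 745.7   # 1 brake horsepower in watts
KNOT_TO_MS = 0.5144    # 1 knot in metres per second

power_w = 465 * BHP_TO_WATTS    # shaft power, roughly 347 kW
speed_ms = 50 * KNOT_TO_MS      # roughly 25.7 m/s

eta = 0.5                       # assumed overall propulsive efficiency
thrust_n = eta * power_w / speed_ms
print(f"estimated thrust: {thrust_n:.0f} N (about {thrust_n / 4.448:.0f} lbf)")
```

The resulting figure, a few thousand newtons, is only meant to convey the scale of drag a 50-knot torpedo had to overcome.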

From 1923, German experiments with electric torpedoes continued in secret in Sweden, and the design was finalised six years later. Since the Versailles treaty banned Germany from possessing submarines, the electric torpedo was held in readiness until Hitler came to power and began repudiating the terms of Versailles.

Magnetic influence exploders had been developed during the Great War, and the Duplex exploder was fitted to Royal Navy torpedoes from 1938. But the old problem of insufficient live-firing tests cropped up, and was to seriously affect their performance. The only navy to carry out large-scale live torpedo firing was the Imperial Japanese navy, which had expended many obsolete warships in tests in the lead-up to the Second World War. The Japanese saw in the torpedo the weapon they needed to give them an edge over the numerically superior US fleet in the Pacific. Japanese war strategy involved wearing down the US Navy in a series of actions across the Pacific Ocean until parity had been reached with the Japanese battle line, when the dreadnoughts would move in for the decisive final battle. A major part of this strategy depended on the torpedo: they set to work to produce the best in the world, and in this they succeeded.

Japanese development of an electric torpedo for submarines began in 1921, inspired by the model the Germans had introduced to their U-boats just prior to the Armistice. The design was finalised by 1925. The 21in torpedo was powered by two 54-cell lead-acid batteries feeding a 95ehp motor. It could run at 28 to 30 knots out to 7000m (7660yds), carrying a 300kg (660lbs) warhead. It became the Type 92 in 1934, but manufacture was suspended, ready for mass production in the event of war.

The Imperial Japanese navy studied other German late-war developments, including the 60cm (23.6in) torpedo. They had previously tried out the 27.5in Fiume torpedo produced in around 1900, and in 1905 they had ordered 24in torpedoes from Fiume for coastal defence. Now they decided to produce a heavyweight torpedo of their own for the anticipated conflict with the Americans. The result was the 24in Year 8 torpedo of 1919, capable of 38 knots over 10,000m (11,000yds) and carrying a 345kg (759lbs) warhead. Ten years later the 24in Type 90 appeared, capable of 46 knots over 7000m (7660yds) with an explosive charge of 375kg (825lbs).
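The paired metric and imperial figures quoted throughout this chapter are rounded, sometimes loosely. A quick sketch of the exact conversions behind two of the figures above:

```python
# Exact conversion factors behind the rounded metric/imperial pairs
# quoted in the text.
M_TO_YD = 1.0936    # metres to yards
KG_TO_LB = 2.2046   # kilograms to pounds

range_yds = 10_000 * M_TO_YD   # quoted above, rounded, as 11,000yds
warhead_lbs = 345 * KG_TO_LB   # quoted above as 759lbs (i.e. 345 x 2.2)

print(f"10,000 m = {range_yds:,.0f} yds")
print(f"345 kg   = {warhead_lbs:.0f} lbs")
```

The text's pound figures evidently use the rough factor 2.2 rather than the exact 2.2046, which accounts for the small discrepancies.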

They had briefly tested oxygen-enriched torpedoes in 1917. Future Admiral Oyagi, during the two years (1926–27) he spent at the Whitehead factory in England, heard rumours that the Royal Navy was fitting oxygen-fuelled 24.5in torpedoes in Rodney and Nelson. In fact the 24.5in Mark 1 originally ran on oxygen-enriched air, but on his return to Japan, Oyagi headed up a team to work on producing a version of the Japanese 24in torpedo to run on 100 per cent oxygen.

There were severe problems to overcome. The oxygen had to be prevented from coming into contact with any of the lubricants in the torpedo mechanism, to avoid the risk of explosion. More serious were the explosions which occurred in the engine combustion chambers as soon as the oxygen and kerosene fuel were injected. The design team overcame this hazard by starting the torpedo on compressed air, stored in a ‘first air bottle’ and only then gradually changing over to pure oxygen. They succeeded in producing a working torpedo, which was designated the Type 93, from the year 2593 in the Japanese calendar when the design was finalised.

The Type 93 oxygen-fuelled engine produced 520 horsepower at 1200rpm, compared with the 240hp of the British 24.5in and the 320hp of the initial Mark VIII. It could run at 49 knots for 20,000m (22,000yds), an exceptional performance. At 36 knots it would reach out to a phenomenal 40,000m (44,000yds). It carried a 490kg (1078lbs) warhead, capable of inflicting devastating damage. And it was practically wakeless. To profit fully from its deadly characteristics the IJN introduced power-reloading gear to their large fleet destroyers, following the provision of multiple reloads on board their cruisers.

Since the Japanese had previously had difficulty in forging the air vessels required for their licence-built Whiteheads, they constructed a special 4000-ton press to forge the body and after end of the flasks for the Type 93 out of steel billets. The air flask forward end cover was fixed with a large copper washer, the internal pressure keeping the joint gas-tight, and the arrangement proved extremely satisfactory.

The Type 93 was too large to be carried in submarines, so a smaller 21in oxygen torpedo, the Type 95, was designed for them in 1935. It could run at 49 knots for 9000m (9840yds) and at 45 knots it reached out to 12,000m (13,000yds). The Type 95 carried a 405kg (891lbs) warhead; in 1943 the Model 2 would carry a warhead of 550kg (1210lbs). The Type 95 first air bottles often leaked. In the tubes of cruisers and destroyers it was a simple matter to verify the pressure of oxygen-fuelled torpedoes at regular intervals and, if necessary, top it up, but in a submarine this was not so easy, a factor leading to the electric Type 92 being resuscitated.

In 1937 the Japanese designed their third oxygen-fuelled torpedo, this time an even smaller 18in model, the Type 97, designed for midget submarines. It carried a 350kg (770lbs) charge at 45 knots over 5500m (6000yds). The original Type 97 would see action only once, during the attack on Pearl Harbor. Its leaky first air bottles were impossible to check and recharge: the torpedoes were muzzle-loaded into the tubes before the start of a mission, and the crew had no access to them when under way.

The Japanese pre-war torpedo arsenal was completed by the excellent Type 91 aircraft torpedo. They experimented with high-speed torpedoes for destroyers, the two built reaching 56 knots, and also with turbine torpedo engines, but neither development was pursued. Up until 1940 they used round-nosed torpedo heads, but in that year the Italian streamlined torpedo head was introduced and used on all types, with a claimed increase in speed of around 2 knots with no increase in engine power.

The US Navy produced three new torpedo designs in the 1930s, which would continue to serve throughout the Second World War. The Mark 14 submarine torpedo was designed in 1931, and was a development of the previous Bliss-Leavitt designs. The destroyer version was the Mark 15, which was longer with a larger warhead, but otherwise differed in minor details only.

The US Navy had begun experiments with alternative fuels as early as 1915, and in 1929 had started a research programme at the Naval Research Laboratory. By 1934 they had produced ‘navol’, a concentrated solution of hydrogen peroxide in water, to provide an oxygen source for burning alcohol as fuel. Work on the projected Mark 17 Navol torpedo for destroyers was interrupted by the attack on Pearl Harbor and the urgent need to produce torpedoes of the existing types.

The Atomic Bomb – WWII Axis and Soviet

Nazi Germany

In 1938 Otto Hahn, a German chemist, was the first to succeed in breaking up (fissioning) a uranium atom into lighter elements by bombarding it with neutrons. At the time, the idea seemed preposterous, even to him, and he doubted his own test results, but after overcoming his reservations he published his findings in a scientific journal. It was soon realized that this process could yield enormous amounts of energy. Leo Szilard, a physicist of Hungarian descent, had already considered this possibility while staying in London in 1933. Hahn’s discovery, however, clinched the argument for Szilard, now working in the United States, and he was horrified by the possibility that Nazi Germany could develop such a bomb. He then convinced Albert Einstein to approach President Roosevelt to initiate an American atomic research program.

The American scientists were sure from the start that they were in a race against Germany, and this feeling was intensified when in December 1942 they succeeded in producing the first sustainable nuclear chain reaction. The development of a bomb now became almost a certainty, but nobody knew where the Germans stood. Furthermore, even if the Germans were lagging behind, knowledge of American successes might urge them to intensify their efforts.

The detailed story of the development of the atomic bomb has already been told innumerable times, as has the story of Soviet spies and their successful attempts to uncover the Manhattan Project’s secrets. But the interesting question is how the Soviets became interested in what the Americans were doing, and what the Germans and the Japanese knew of the American effort.

The short answer is that the Germans knew little, probably nothing, about the American work. After Hahn’s discovery, the Germans tried repeatedly to achieve a chain reaction in an experimental reactor but failed.

Immediately after the German surrender, all the German nuclear scientists, some of them Nobel laureates, were taken into British custody. They were moved to an English rural estate (Farm Hall) and interrogated. In the central room, where the captives gathered for dinner, the British installed hidden microphones, and the full transcripts of the German conversations were later published (Bernstein and Cassidy 1995; Groves 1962, 333–35). Major Ritter, the senior British officer at Farm Hall, told Otto Hahn about the bombing of Hiroshima, and the stunned Hahn relayed the news to his colleagues.

That night the conversation among the German scientists focused on the question of whether the Americans had succeeded in producing a sufficient quantity of fissile uranium—or did they use plutonium? President Truman did not specifically mention uranium, and Werner Heisenberg, head of the German nuclear project, contended that it was impossible for it to actually be a nuclear bomb. In fact, he initially claimed that someone had duped the American government and that it was only an enormous conventional bomb. Later in the conversation, Hahn said to Heisenberg, “In any case you are a second rate scientist and you better acknowledge this fact,” and Heisenberg agreed.

Later, and in spite of the doubts, the conversation centered on the possibility that the Americans had indeed achieved that breakthrough and how they might have done it. British physicists who listened to these conversations came to the conclusion that the Germans lacked much of the basic knowledge on the subject.

The Germans were totally surprised by the American announcement, and Heisenberg, the senior German scientist, did not believe that the Americans were capable of achieving the feat. This was pure conceit on his part: since the Germans did not consider the Americans their intellectual equals, whatever the Americans might be doing was not worth watching. German scientific intelligence and analysis failed in this respect, in addition to its failure to collect information in many other fields (Kahn 1978). It is possible that this intellectual snobbery was partly to blame for the German disregard of prewar information about British and American capabilities in the field, and they certainly did not seriously consider Soviet capabilities. The Germans made another basic mistake: they paid no attention (if they noticed the fact at all) to the disappearance of American publications on nuclear topics. The Soviets and the Japanese did, but the Germans refused to see the writing on the wall.

The Soviet Effort

The first hint of something unusual in the field of nuclear physics was received in the Soviet Union at the beginning of 1942. Before the war, nuclear research in the Soviet Union consisted of theoretical work and modest laboratory experiments. In July 1940, two Soviet researchers, Georgii Flerov and Konstantin Petrzhak (both colleagues of the physicist Igor Kurchatov), published a paper in the American Physical Review about spontaneous (and rare) natural fission in uranium. Intentional fission by means of neutrons had already been achieved by Otto Hahn in 1938, and the subject of uranium fission became a hot topic in the scientific community. In June 1941, Germany invaded the Soviet Union, and all nonessential research work (including nuclear research) was stopped. The research teams were assigned work in support of the fighting, and the younger researchers, including Flerov, were mobilized.

In February 1942, Lieutenant Flerov’s unit (an air force reconnaissance squadron) was stationed near the town of Voronezh, and on one free day Flerov went to the local university’s library (which had not been evacuated eastward) and perused back issues of Physical Review, looking for comments on his July 1940 paper. (A prestigious publication often takes many months, even more than a year, to publish such comments.) He later related how surprised he was to find that, even after a year and a half, there were none. Stranger still, he discovered that no American journal carried publications about nuclear physics at all (Holloway 1994, 78). He of course did not know, and could not guess, that the editors of the important scientific journals in the United States had decided (voluntarily) to halt such publication so that useful information would not leak to the Germans, who were considered to be leading the field (Richards 1994, 92). This decision was prompted, at least to a degree, by an incident in which James Chadwick (a 1935 Nobel laureate for the discovery of the neutron) was involved.

A paper about plutonium was published in the Physical Review in June 1940 by two Berkeley scientists. The journal was distributed to subscribers all over the world, including Germany. At that time, it was already understood that radar had military significance, and it was treated accordingly. Atomic bombs, or “super bombs” as they were called then, were solely in the domain of science fiction writers, and while research in the field was scientifically serious, its military implications were not yet clear. James Chadwick (from Britain, which had already been at war for almost a year) was so agitated by that publication that he convinced the British embassy in the United States to formally complain. A senior official was sent to California and reproved Ernest Lawrence, a central figure in nuclear research at Berkeley (who himself received the Nobel Prize in Physics for his invention, in 1929, of the cyclotron, a particle accelerator), for his impetuosity in disclosing secrets in such troubled times (Rhodes 1988, 350–51). The American editors took notice and stopped such publications, but the sudden drying up of such papers was itself noticed. (This is a good example of too much motivation causing a failure.)

Following his discovery, Flerov came to the conclusion that nuclear physics had become a big secret in the United States. He was only partially correct: it was already a secret, though as yet not a big one. Flerov wrote to the State Defense Committee’s plenipotentiary for science, voicing his suspicions. (At that time, most Soviet nuclear physicists claimed that nuclear weapons or even nuclear energy were not a practical matter [Holloway 1994, 54].) When Flerov got no answer, he took a bold step. As a citizen of the Soviet Union, he had the right to appeal directly to Stalin, and in April 1942 he did so. In a long and well-reasoned letter, he explained what he thought the Americans were doing and concluded by saying that in his opinion the Soviet Union should not neglect this subject (Holloway 1994, 78). It is doubtful that Stalin himself dealt with the matter (after all, he had a war to run), but in the fall of 1942 the senior nuclear scientists in the Soviet Union were summoned and nuclear research was revived, under the leadership of Kurchatov (Holloway 1994, 86; Richards 1994, 92).

Until 1941, Soviet scientists freely published papers about nuclear topics and other sciences. In this respect, they acted like scientists in most other countries. One of the reasons that the West was not aware of these publications was language difficulties. Soviet scientists knew English and German, but few in the West knew Russian. An expert in the field noted that the average citizen of the Soviet Union knew more about nuclear research in his homeland than the scientists of the Manhattan Project (Richards 1994, 92).

After the war, the Soviets continued with the policy of open publication of research that they did not consider as contributing to military technology, and as previously mentioned, one such paper initiated stealth technology in the United States. But nuclear research was cloaked in total and successful secrecy. The first hint of a Soviet nuclear bomb came from a random collection of air samples, and it was a total surprise to the West. The Soviets had managed to keep their secret. So complete was the Western intelligence failure that it was not believed possible, and an opinion was even voiced that the CIA had known about the Soviet work but suppressed the information.

The Japanese Effort

The Japanese, too, noticed the disappearance of publications about nuclear physics in the United States. The head of the Japanese army’s research institute for aviation technologies followed the publications in this field in the years 1938–1939 and deduced, correctly, where things were leading. He then tasked one of his assistants to check for potential uranium sources within the borders of the Japanese empire, including future conquests. This man approached Yoshio Nishina, who had studied under Niels Bohr and was then a senior physicist in Tokyo. In 1940, Nishina gathered more than one hundred brilliant students and led initial work in nuclear physics. As part of this work, a large cyclotron was constructed, the plans for which had earlier been given to Nishina by Ernest Lawrence.

The Japanese navy also became aware of the subject. In the spring of 1942, a naval committee recommended initiating research about nuclear power for navy ships. Another committee, a secret one, was convened to check the feasibility of nuclear weapons. This one tried to answer two questions: Are nuclear weapons possible at all? And if so, does Japan have the resources for such a project, and can such resources be allocated to it in the course of the present war?

The deliberations of the committee were no doubt influenced by the first setbacks Japan suffered in the war. In early May 1942, a Japanese thrust toward New Guinea was repulsed in the Battle of the Coral Sea. One month later, the Japanese navy suffered a resounding defeat in the Battle of Midway (so bad that the Japanese government tried to hide it from its public), losing four carriers to one American. In August, U.S. marines landed on Guadalcanal, and although the fighting still raged on, it was obvious that it was only a question of time before that strategic island, with its important airfield, would be lost. It appeared that the war might last longer than expected, with a commensurate drain on resources.

The conclusion of the committee was that a nuclear weapons project would last at least ten years and require half of Japan’s production of copper and one tenth of Japan’s electric power capacity. All agreed that such demands would stretch the Japanese economy beyond the breaking point. Consequently, in March 1943, the committee recommended that all nuclear research work be terminated and resources, manpower in particular, be transferred to other fields, especially radar. At that time, Japan already realized that it was way behind in this critical field.

The committee discussed another topic, and this is why the story of Japan’s atomic effort is broached here in a book about technological intelligence. The question was whether either Germany, the principal ally, or the United States, the principal enemy, had the capability to develop nuclear weapons. The disappearance of American publications on the subject was a glaring beacon and worried them all. But the committee reached the conclusion that neither Germany nor the United States had the scientific and industrial resources to get quick results in a project of this magnitude (Rhodes 1988, 458).

The committee was probably right about Germany, at least from the practical aspect. In time, German scientists would probably have overcome the theoretical problems (and mistakes) that hindered their work. But as we know now, theoretical work is not enough. As regards the United States, the picture was completely different.

Looking back, it appears that the members of the committee, erudite as they were in their fields of expertise, apparently did not understand the United States and did not have enough information about its potential resources. Most of them had probably never visited the country, did not appreciate its size, and were unfamiliar with its industrial and commercial culture. Admiral Isoroku Yamamoto, the commander of the Combined Fleet and the architect of the attack on Pearl Harbor, understood the United States better. When Japan seemed to be sliding into war, the Japanese prime minister asked Yamamoto for his opinion about the chances of victory in a war with Great Britain and the United States. His answer was, “I can raise havoc with them for one year or at most eighteen months. After that I can give no one any guarantees” (Potter 1967, 56). Later, talking to the navy admirals, he modified his assessment to “six months to a year of war” and added that if the war was prolonged to two or three years, he had no confidence in Japan’s ultimate victory (Potter 1967, 58). As things turned out, he was prophetically accurate in his timetables. But few realized that Yamamoto was fluent in English, had once been a student at Harvard (1919–1921), and had served as Japan’s naval attaché in Washington (1926–1928). He also meticulously followed American exercises of attacks against the Panama Canal and carrier-launched attacks against Pearl Harbor and was very much impressed (Lowry and Wellham 2000, 17).

Even if that committee had reached the conclusion that the United States was capable of developing nuclear weapons, it would not have helped them. On the one hand, they could not mortgage so much of their resources for this project. On the other hand, after December 7, the American public would not have accepted anything less than a total surrender, and the Japanese could never agree to this. Even after the second atomic bomb was dropped on Nagasaki, a large group of Japanese officers wanted to keep on fighting and was only a short step from an open rebellion against the emperor (Pacific War Research Society 1983, 58, 129, 149).

The Japanese made another mistake, which originated from misunderstanding the American state of mind. The underlying reason for Japan’s aggression was the need for raw materials, and in Southeast Asia these were mostly under British and Dutch control, with some in French hands. In the mid-thirties, a Japanese naval officer published a book in which he presented a well-reasoned (from a Japanese point of view) theory on why Japan must fight Britain. The United States was barely mentioned in the book, and the author stated that diplomatic efforts should be made to prevent it from joining the fight on the side of Britain (Ishimaru 1936, 191–93). Except for the abstract question of “control” of the Pacific Ocean, there really were no friction points between the United States and Japan, apart from the American public’s revulsion at Japanese atrocities in China, which hardly constituted a casus belli. What would have happened if Japan had attacked only Britain and Holland? (France was governed by the Vichy regime, which collaborated with the Germans, and the Japanese had in effect a free hand in French Indochina.) The Japanese assumed that the United States, an English-speaking Western society, would rush to help Britain and Holland. They also worried about the U.S. Navy, because of that question of “control,” but on the other hand failed to grasp the intensity of isolationist sentiment in the United States, which would have prevented it from initiating a war against Japan. Admittedly, this is a “what if” type of speculation, but it also strongly supports the argument that, in order to conduct an efficient strategy against an enemy of another culture, social and cultural intelligence is needed as well, and not only operational and technological intelligence.

In contrast to the Americans, the Japanese scientific and research activities in the nuclear field were disorganized. In Japan, there was no coordination or collaboration in research between the various military services and the civilian sector, and there was no central guiding hand for the various research activities (Grunden 2005, 79). After the navy committee concluded that Japan did not have the resources to enter into development of nuclear weapons, rumors reached the army that both Germany and the United States were working on nuclear weapons. So with the ink hardly dry on the navy’s conclusions, the prime minister (and minister of the army) called for an acceleration of nuclear research efforts (Grunden 2005, 69). But the Japanese scientists ran into nearly every conceivable technical problem, and the project was finally dealt the coup de grâce when, on April 13, 1945 (a Friday), a bomb from a B-29 destroyed their laboratory complex and ended the Japanese nuclear project (Grunden 2005, 78; Rhodes 1988, 612).

The great torpedo scandal, 1941-43

Mark 14 torpedo’s side view and interior mechanisms, published in “Torpedoes Mark 14 and 23 Types, OP 635”, March 24, 1945

A torpedo may take a long time before it settles on its final course. If the torpedo direction is still changing when the torpedo arms, it may set off the magnetic influence exploder.

by Frederick J Milford

Naval rearmament, which began in the mid-1930s, and WW II had a dramatic impact on US torpedo programs. Three of the most significant changes were the enormously increased requirement for torpedoes, the urgent need for new torpedo types and the first use of US torpedoes against enemy vessels. The increased requirement was satisfied by expanding government facilities (the Newport Torpedo Station (NTS-Newport) was enlarged, the Alexandria Torpedo Station was reopened 1 and the Keyport Torpedo Station began assembling torpedoes) and by initiating civilian production. Total production between 1939 and 1945, almost 60,000 torpedoes, was about equally divided between the torpedo stations and contractors. Mk.14 torpedoes were, however, in such short supply in 1942 that some fleet boats loaded out with Mk.10 torpedoes or even Mk.15s in the after tubes 2. New types of torpedoes are discussed in Part Three of this series. Firing warshots was an almost totally new experience for the US Navy. It seems probable that the number of warshots fired against enemy vessels in December 1941 was larger than the total number of warshot torpedoes fired for any purpose 3 in the entire previous history of the US Navy. Perhaps not surprisingly, this intensive use of torpedoes revealed shortcomings that had previously been obscured, especially in the new service torpedoes and particularly in the Mk.14.

The trio of new service torpedoes, Mk.13, Mk.14 and Mk.15, which represented the bulk of US Navy torpedo development in the 1930s, were on the one hand excellent weapons with long service lives: the Mk.13 remained in service until 1950, the Mk.14 was a valuable service weapon until 1980 and the Mk.15 served as long as twenty-one-inch torpedoes remained on destroyers. On the other hand, they all had significant problems that were only fixed after wartime use began. The Mk.14, which was the principal submarine weapon, was plagued with defects that vitiated its use as a weapon until mid-1943. The conflict between the shore establishment and the operating forces over these problems was a very significant and much-discussed factor in US submarine operations during WW II.


The Great Torpedo Scandal 4 emerged and peaked between December 1941 and August 1943, but some of its roots went back twenty-five years. It involved primarily the Mk.14 5 and three distinct problems: depth control, the magnetic influence exploder 6 and the contact exploder, whose effects collectively eroded the performance of the torpedoes. The scandal was not that there were problems in what was then a relatively new weapon, but rather the refusal by the ordnance establishment to verify the problems quickly and make appropriate alterations. The fact that after twenty-five years of service the Mk.10 had newly discovered depth control problems adds weight to the characterization of the collection of problems and responses as a scandal. These comments should, however, be mitigated a little by the fact that each of the Mk.14 problems obscured the next. Although BuOrd did not identify the final problem (contact exploder malfunction when a torpedo running at high speed struck the target at ninety degrees), their response, once the difficulty had been identified, was notably prompt. Even so, by the time BuOrd’s response reached Pearl Harbor a number of relatively simple solutions to the problem had already been proposed, and modifications had been designed and implemented. This was, however, almost two years after the United States entered WW II.

Torpedo Depth Control

The first of the US torpedo problems was deep running, a problem that afflicted various navies beginning at least as early as WW I. The problem was not, however, always due to the same sort of defect 7. At least four distinct kinds of problems affect depth control:

1) Differences between calibration shots and service/warshots

    a) Torpedo weight or balance changed in converting to warshots, for example, warheads that were heavier than calibration heads.

    b) Calibration firings failed to simulate service launch conditions, for example, calibration firings from barges or surface vessels rather than submerged torpedo tubes, or launch speeds (the speed at which the torpedo leaves the tube) and launch accelerations that differed from service conditions.

2) Design or manufacturing defects causing changes in calibration after proofing or effectively causing calibration to change with time or environment, for example, sensing water pressure where flow corrections were large, or depth spring fatigue, or leaky castings etc.

3) Erroneous calibration: failure to check against an absolute standard, for example, total reliance on hydrostatic depth measurement and failure to use nets, soft targets or other sensing systems to establish true depth.

4) Inadequate understanding of the technology involved, for example, failure to recognize the importance of hydrodynamic flow in sensing the pressure at the skin of a fast torpedo; lack of understanding of the feedback loop and depth control dynamics 8.

Amazingly, US torpedoes, especially the Mk.14, demonstrated that most of these possibilities could, in fact, occur.

Depth control problems with US torpedoes were suspected by the Newport Torpedo Station (NTS-Newport) and BuOrd even before the United States entered WW II. On 5 January 1942 BuOrd, based on earlier (1941) testing, advised that the Mk.10 torpedo, which had entered service in 1915 and was still used in S-class submarines, ran four feet deeper than set 9. NTS-Newport tests on the Mk.14 torpedo in October 1941 had been interpreted as indicating that it too ran four feet deeper than set, but this was not reported to the submarine commands at that time. War patrol experience led to fleet suspicions that the torpedoes ran deep, and these suspicions were communicated to BuOrd. In response to a direct order from the Chief of the Bureau of Ordnance, additional NTS-Newport tests in February-March 1942 “confirmed” the four-foot error for the Mk.14. RAdm William H. Blandy, Chief of BuOrd, notified RAdm Thomas Withers, Jr., ComSubPac, of the problem in a letter dated 30 March 1942, but general notification to the submarine forces was not made until BuOrd issued Circular Letter T-174 dated 29 April 1942. The language in the correspondence between Withers and Blandy indicates that Newport and BuOrd believed the four-foot error in Mk.14 depth was due to calibrating torpedoes with test heads that were lighter than the warhead. This would cause torpedoes with warheads to run deep both because of increased weight and because of a nose-heavy trim. The Mk.14 depth control problem was, however, much more severe than the four feet acknowledged by NTS-Newport.

In a mood of desperation, the operating forces made their own running depth determinations, using fishnets for depth measurement, at Frenchman’s Bay in Australia on 20 June 1942. These measurements indicated that the depth errors were probably more like eleven feet 10. BuOrd and NTS-Newport criticized the methodology and were reluctant to accept the results of the Frenchman’s Bay firings, and it was not until August 1942, after intervention by the CNO, Admiral Ernest J. King, that they re-investigated and agreed that there was a ten-foot depth error in the Mk.14 system. Interim instructions for fixing the problem were issued very quickly, and kits to effect an official alteration were distributed in late 1942. As near as we have been able to determine, there were two independent problems: trim change due to warheads heavier than calibration heads, and sensing the water pressure at a point where the velocity head was significant and consequently the measured pressure was low. The fix for the latter moved the pressure sensing port to the interior of the free-flooding midbody, where the pressure was close to the true hydrostatic pressure and so reflected the true depth. The modified torpedoes were identified by the suffix A added to the Mod. number, the most famous being the Mk.14 Mod.3A.

Since the hydrodynamic problem has seldom been explained in readily accessible documents, we give a brief summary here. The pressure along the length of a torpedo varies because the velocity of the water relative to the surface varies. The pressure at the nose is higher than the hydrostatic pressure, which is proportional to depth, by an amount proportional to the square of the torpedo’s speed. This corresponds to a depth of 39 feet of seawater for a torpedo moving at 30 knots, or 88 feet at 45 knots. As the measuring point is moved back along the skin of the torpedo, the pressure decreases rapidly and becomes substantially less than the hydrostatic pressure. The pressure subsequently rises but remains slightly less than the hydrostatic pressure along most of the cylindrical section. Finally, along the conical afterbody the pressure again drops and then rises, though, since the actual flow is not streamline, not to the values found at the nose. The critical point is that the pressure at the skin of a torpedo is generally different from the hydrostatic pressure corresponding to the torpedo’s depth, and the deviation is substantial in the nose and tail cone regions. A depth error due to the measurement of the wrong pressure would, of course, be detected in any calibration process that used an absolute depth measurement for reference. Unfortunately, the Torpedo Station used a depth and roll recorder which determined depth by measuring the water pressure and was thus subject to the same kind of error as the depth gear. Furthermore, the depth and roll recorder was placed in the test head at a point where the hydrodynamic pressure was less than the hydrostatic pressure by almost the same amount as at the location, in the afterbody, of the sensing port for the depth gear. Thus both the recorder and the depth gear sensed essentially the same pressure, though not the hydrostatic pressure, and the torpedo appeared to be running at the set depth.
The depth engine, however, responded to the lower pressure by adjusting the horizontal rudders to correct this “error”, and the torpedo ran deep. The hydrodynamic theory needed to understand this problem was readily available in the 1930s, but most design engineers were quite probably not acquainted with it. In consequence, it was assumed that since the depth recorder showed the correct depth, the torpedo was running at the correct depth. There are other insidious aspects to this problem. One is that a depth recorder checked by static immersion in water to various depths, or in a pressurized tank of water, reads correctly, since the error described above is due to hydrodynamic flow. Further, the error is proportional to the square of the torpedo speed and is thus almost twice as large for a 46-knot torpedo as for a 33-knot torpedo. None of these observations, however, justifies or excuses the failure to use an absolute standard to verify the results obtained with the depth and roll recorder, or the obdurate resistance to complaints from the operating forces.
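The magnitudes quoted above can be checked with a short calculation: by Bernoulli’s relation, the stagnation pressure rise at the nose corresponds to a head of v²/2g, and the depth error scales as the square of the torpedo’s speed. A minimal sketch (the speeds are those in the text; the conversion factors are standard):

```python
# Stagnation head at a torpedo's nose: h = v^2 / (2g), from Bernoulli.
# The text quotes ~39 ft of seawater head at 30 knots and ~88 ft at 45 knots.

G_FT_S2 = 32.174             # gravitational acceleration, ft/s^2
FT_PER_S_PER_KNOT = 1.6878   # 1 knot = 1.6878 ft/s

def stagnation_head_ft(speed_knots: float) -> float:
    """Pressure rise at the nose, expressed as feet of water head."""
    v = speed_knots * FT_PER_S_PER_KNOT
    return v * v / (2.0 * G_FT_S2)

print(round(stagnation_head_ft(30.0), 1))  # ~40 ft: the text's 39 ft
print(round(stagnation_head_ft(45.0), 1))  # ~90 ft: the text's 88 ft

# The error grows as speed squared, so a 46-knot Mk.14 suffers nearly
# twice the depth error of a 33-knot torpedo:
print(round((46.0 / 33.0) ** 2, 2))
```

The small differences from the article’s round figures (39 and 88 feet) reflect only rounding in the original.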

The operational aspects of the depth control problem have been recounted many times 11. The Mk.10 problem, which was probably dominated by the error caused by the change from exercise heads to warheads, was handled by simply setting the torpedo to run at a shallower depth; this procedure was implemented in January 1942, over twenty-five years after the weapon entered service. The Mk.14 problem required both a calibration modification and a modification to sense water pressure in the midships section; the latter was implemented beginning in the last half of 1943.

The Magnetic Influence Exploder

The second problem with the Mk.14 torpedo was the erratic performance of the magnetic influence feature of the Mk.6 exploder. Magnetic influence exploders had great appeal as proximity fuses for torpedoes, offering the possibility of detonating warheads under the vulnerable bottoms of warships. This potential advantage led most of the major navies to attempt to develop such exploders, and generally these first attempts were not successful in service use.

The basic idea of a magnetic influence exploder is to sense either the field due to permanent magnetization of a ship’s hull or the perturbation of the Earth’s magnetic field caused by the large quantity of relatively high permeability ferrous metal in the ship’s structure. This is a sound and workable idea, but early simple attempts did not take adequate account of the nature of the perturbation. The Mk.6 device in particular relied on the variation of the horizontal component of the magnetic field as the torpedo approached the target. This field variation induced a voltage in a sensing coil. The voltage triggered a thyratron, which discharged a capacitor through a solenoid. The solenoid, in turn, operated a lever that displaced the inertia ring, thus triggering the mechanical exploder. This complex arrangement was presumably adopted so that an exploder, the Mk.5, without the magnetic influence portion but otherwise identical to the Mk.6, could be produced and issued to the fleet in peacetime. Security was apparently the overall motivation for this convoluted approach.

The perturbation of the Earth’s field by a ship naturally depends on the inclination of the Earth’s field to the horizontal. This inclination varies from zero at the magnetic equator to ninety degrees at the magnetic poles. At NTS Newport it is about sixty degrees. Regardless of the inclination of the Earth’s field, a ship, because of the ferrous metal in its structure, causes both horizontal and vertical perturbations of the Earth’s field which vary with distance and direction from the ship. The closer the Earth’s field is to vertical the greater the rate of change of the horizontal perturbation field with distance and the closer to a point directly below the keel the maximum rate of change occurs. Thus a device that senses the rate of change of the horizontal component of the perturbed field works best where the Earth’s magnetic field has a large vertical component. Unfortunately, a device that works well at high magnetic latitudes may not work at all well where the Earth’s field is nearly horizontal. Thus, the performance of a simple magnetic influence exploder is significantly dependent on the latitude at which it is operated.
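The latitude dependence described above can be illustrated with a toy model: idealize the ship’s magnetic signature as a single induced dipole aligned with the Earth’s field, and compare the along-track gradient of the horizontal field component sensed by a torpedo running beneath the keel. The point-dipole model, the track geometry, and the unit strengths are illustrative assumptions, not a description of the actual Mk.6 design:

```python
# Toy model: ship as a point dipole (moment along the Earth's field),
# torpedo running horizontally at fixed depth below it.  We compare the
# along-track gradient of the horizontal perturbation field for a
# vertical Earth's field (high magnetic latitude) versus a horizontal
# one (magnetic equator).  Units are normalized; only ratios matter.

import math

def b_horizontal(mx: float, mz: float, x: float, d: float) -> float:
    """Horizontal (x) component of a unit dipole's field, measured at
    horizontal offset x and depth d below the dipole (constants dropped)."""
    r2 = x * x + d * d
    r = math.sqrt(r2)
    m_dot_rhat = (mx * x - mz * d) / r
    return (3.0 * m_dot_rhat * (x / r) - mx) / (r2 * r)

def max_gradient(mx: float, mz: float, d: float = 1.0) -> float:
    """Largest |dBx/dx| seen along a horizontal run past the ship."""
    h = 0.001
    return max(abs((b_horizontal(mx, mz, x + h, d) -
                    b_horizontal(mx, mz, x - h, d)) / (2 * h))
               for x in (i * 0.001 for i in range(-4000, 4000)))

grad_vertical = max_gradient(0.0, 1.0)    # Earth's field vertical
grad_horizontal = max_gradient(1.0, 0.0)  # Earth's field horizontal
print(grad_vertical, grad_horizontal)
# The vertical-field case gives the sharper signal, and its peak lies
# directly under the keel, consistent with the text: a sensor keyed to
# the horizontal component works best at high magnetic latitudes.
```

In this model the maximum gradient for a vertical field is roughly half again as large as for a horizontal one, and it occurs directly below the ship, which is where a keel-depth detonation is wanted.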

Exactly this problem affected the magnetic exploders developed by the Royal Navy, the German navy, and the US Navy. The Royal Navy quickly abandoned magnetic influence devices and relied on contact exploders. The German navy provided a sensitivity adjustment that would, in principle, compensate for changes in latitude; this was unsatisfactory, and it too was abandoned fairly quickly 12. The BuOrd/NTS-Newport response was first denial that there was a problem, and then a complicated set of instructions for setting the exploders for different latitudes.

The magnetic influence exploder was unquestionably responsible for sinking some, perhaps even a large fraction, of the 1.4 million gross register tons of Japanese merchant shipping sunk by submarines between December 1941 and August 1943. Reports from submarine commanding officers of apparent magnetic influence exploder failures, mainly duds and prematures, finally led CinCPac to order the disabling of the magnetic influence feature on 24 June 1943. ComSubSoWesPac reluctantly followed suit in December 1943 13. CinCPac’s order came eighteen months after Jacobs, on Sargo’s first war patrol, ordered the deactivation of the magnetic influence portion of the Mk.6 exploders in his torpedoes, and incidentally got into considerable difficulty for doing so. Magnetic influence exploders were not used by US Navy submarines for the balance of WW II.

The Impact Exploder

Once the depth problem had been fixed and the magnetic influence feature of the Mk.6 exploder deactivated, it was the turn of the impact exploder to demonstrate its merit. Unfortunately, the initial result was a plethora of duds: solid hits on targets without warhead detonation 14. This problem had been suspected earlier, but it was not until the other two problems had been eliminated that there was unequivocal evidence of a defect in the impact exploder. The difficulty was a further frustration for the operating forces, but fortunately it was quickly diagnosed. The key to the problem was again the increased speed of the Mk.14 15. The impact portion of the Mk.6 exploder was exactly the same as that used in the Mk.4 and Mk.5 exploders, and the Mk.4 worked entirely satisfactorily in the 33.5-knot Mk.13 torpedo. What was overlooked was that in going from 33.5 knots to 46.3 knots the inertial forces involved in striking the target at normal incidence were almost doubled. These greatly increased inertial forces were sufficient to bend the vertical pins that guided the firing pin block. The displacement was sometimes enough to cause the firing pins to miss the percussion caps, resulting in a dud. In oblique hits the forces were smaller, and the impact exploder more often operated properly. Several war patrols, especially those cited above, convinced ComSubPac, VAdm Charles Lockwood, that there was a problem, and he again resorted to experiment. Firings at a cliff in Hawaii demonstrated that some torpedoes did not detonate when they hit the cliff, and a rather risky disassembly of a dud revealed the distortion of the guide pins. One simple solution was to make firing pin blocks of aluminum alloy rather than steel and to lighten them as much as possible, reducing the inertial forces to a level that did not distort the guide pins. Another solution was to use an electrical detonator fired through a ball switch; this too was relatively easy to implement and soon became standard.
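The firing-pin failure is ultimately a matter of simple scaling: for a roughly fixed stopping distance on impact, the deceleration, and with it the inertial load on the firing pin block, grows as the square of the striking speed, and a lighter block reduces the load in proportion to its mass. A rough sketch of this reasoning (the block masses and stopping distance are illustrative assumptions, not measured values):

```python
# Inertial load on the firing pin block on impact, assuming uniform
# deceleration over a fixed stopping distance: F = m * v^2 / (2s).
# Going from the 33.5-knot Mk.13 to the 46.3-knot Mk.14 nearly doubles
# the load; a lighter aluminum-alloy block buys the margin back.

KNOT_TO_M_S = 0.51444

def inertial_load(speed_knots: float, mass_kg: float,
                  stopping_distance_m: float = 0.01) -> float:
    """Approximate force (N) on the block; the 1 cm stopping distance
    is an illustrative assumption, not a measured figure."""
    v = speed_knots * KNOT_TO_M_S
    return mass_kg * v * v / (2.0 * stopping_distance_m)

steel_block_kg = 0.50      # illustrative mass, not a measured value
aluminum_block_kg = 0.17   # illustrative lighter alloy block

ratio = inertial_load(46.3, steel_block_kg) / inertial_load(33.5, steel_block_kg)
print(round(ratio, 2))  # ~1.91: "almost doubled", as the text says

# At the same striking speed, a lighter block cuts the load in
# proportion to its mass:
relief = inertial_load(46.3, aluminum_block_kg) / inertial_load(46.3, steel_block_kg)
print(round(relief, 2))
```

Note that the speed ratio alone fixes the first result: the assumed masses and stopping distance cancel out of it.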

Once these and other less significant problems were solved, the Mk.14 torpedo became a reliable and important weapon. After WW II it was modified to accommodate electrical fire control settings (gyro angle, depth, and speed), and as the Mk.14 Mod.5 it remained in service until 1980.


It is worth asking how these three problems came about and why they presented such a refractory situation early in WW II. It is easy to identify several contributing factors, though it is unlikely that any one of them alone was decisive. One of the first was economy. These torpedoes were developed during the Great Depression: the total US Navy budget from 1923 through 1934 averaged less than 350 million dollars per year, and total personnel stood at about 110,000. In that environment a torpedo was valued at around $10,000 (about the same as a fighter aircraft airframe, complete except for engine), and destroying one in testing was a risk that only the fearless were willing to run. The result was that testing and proofing were done in such a way as to avoid risk of damage either to expensive torpedoes or to scarce targets. As is often the case, constrained testing failed to reveal certain critical problems. It is, however, difficult to understand how deep running, in particular, escaped discovery: there were well documented reports of German and British problems during WW I. It appears also that impact exploders were not tested in high speed torpedoes, or at least not in impacts of well simulated warheads with hard targets. Such tests were undoubtedly omitted in an effort to avoid destroying useful materiel, exploders in particular, and perhaps further justified by the fact that the exploder performed satisfactorily in lower speed tests and by its primary role as a backup to the magnetic influence exploder. We conclude that, with respect to depth control and the impact exploder, the poor state of navy finances and the concomitant lack of realistic testing probably played a significant role.

Another aspect of the situation was the almost total isolation of NTS-Newport from the larger US technical and engineering community, especially after 1923, when the station secured a monopoly on torpedo development and production. Political and labor interests in keeping jobs in New England probably encouraged the isolation. The net result seems to have been a failure to expand the scientific basis for torpedo technology at Newport at a time when dramatic changes in engineering were taking place elsewhere. No one was thinking about torpedoes from different perspectives and asking hard questions about design details. The isolation was exacerbated, especially in the case of the Mk.6 exploder, by draconian security, which in some cases excluded even the operating forces from full knowledge of the weapons they were expected to use. In this isolated environment NTS-Newport developed an arrogant “We are the torpedo experts” attitude, and when problems began to arise the response was denial (“there is nothing wrong with the torpedoes”), with the result that problems were identified and fixed slowly.

Perhaps not surprisingly, a very strong polarization developed between the operating forces and the torpedo shore establishment. The operating forces resented their exclusion from the torpedo development cycle and flaunted their successes in proving that there were problems with the Mk.14 torpedo. These strongly expressed opinions did not tend to improve relations with NTS-Newport. The operating forces also tended to exaggerate their contributions to the solution of the problems and to deprecate those of NTS-Newport. A distinguished and truly great submariner recently wrote: _So by the beginning of September 1943, the operating submariners had detected and solved three serious defects in the Mark XIV torpedo: its faulty depth setting, skittish magnetic exploder and sluggish firing pin. All three problems had been solved by the operating forces in their tenders and bases, without help from Newport or Washington._ 16 This is certainly an overstatement, but what is most significant is that, though written over fifty years after the events, it still reflects the intense polarization that existed between the operating forces and the torpedo shore establishment.

This spectrum of problems was not unique to the US torpedo establishment. Almost the same set, defective depth control, an unsatisfactory and untested magnetic exploder, and a contact exploder that did not work at certain striking angles, occurred in the German navy, and many of the responses of the shore establishment to the problems were also the same. The situation is discussed in considerable detail by Doenitz in his memoirs 17. The German navy’s problems were closed out, however, with four senior officers tried by court martial on the orders of Grand Admiral Erich Raeder, found guilty, and punished.

Lest there be any implication that the entire US Navy, or even all of BuOrd, was functioning in isolation, we note that at about the same time early experiments with what became radar were being conducted at the Naval Research Laboratory (only about 350 miles southwest of Newport). In 1937 complete disclosure of the state of radar development was made to the Army Signal Corps and Bell Telephone Laboratories, and Radio Corporation of America was brought into the fold in 1938. 18 The contrast between this approach and the Newport approach is nothing if not striking. BuOrd itself, in the development of range keepers for surface fire control, a comparably secret endeavor roughly contemporaneous with the Mk.14 development, co-opted Ford Instrument, ARMA, and Sperry to assist. A later, dramatically contrasting program was the development of the Mk.24 Mine (Torpedo) between December 1941 and May 1943, which is discussed in a subsequent part of this series.

This takes the story of U.S. Navy torpedoes through the beginning of WW II. As the United States became involved in the war, it became apparent that new kinds of torpedoes would be useful, and a multitude of programs to develop improved weapons for submarines, surface vessels, and aircraft were initiated. The idea that torpedoes could be significant ASW weapons also evolved and was elaborated with considerable success. The wartime and postwar development of US Navy torpedoes is discussed in the third part of this series.


1 The Newport monopoly on the torpedo business had a significant effect on the development of torpedoes. The extent of the monopoly and efforts to preserve it are illustrated by opposition to the reopening of Alexandria, which was accomplished in the face of demands from New England politicians and labor leaders that Newport be expanded. Resuming torpedo work at Alexandria expeditiously was possible only because when it was closed in 1923 it had been incorporated into the Washington Navy Yard. Consequently, the torpedo station could be reopened without an Act of Congress.

2 This was mentioned by Adm. B.A. Clarey in a recent interview with John DeVirgilio and confirmed by RAdm M.H. Rindskopf who also supplied key parts of the following material. Mk.15 torpedoes were too long to be loaded through hatches or stowed in the torpedo rooms. They were also too long for either the forward or longer aft torpedo tubes. They were modified, probably by using shorter warheads, and loaded into the aft tubes through the muzzle doors. USS Drum, SS-228, sailed so loaded on her second war patrol from Pearl Harbor in July 1942. All four Mk.15s were fired.

3 This, of course, means self-propelled torpedoes and excludes spar and towed devices. Apparently, only eleven torpedoes were fired by US forces against enemy vessels prior to WW II (AL boats against U-boats). The number of warheads used in training and in test and evaluation was very small. US submarines made 54 war patrols in December 1941 and fired 66 torpedoes at enemy targets, quite possibly more warheads than had been fired in the entire previous history of the US Navy.

4 At least three MA theses have been written about the problems of the Mk.14 torpedo (Ingram (1978), Shireman (1991), and Hoerl (1991)); the problem was noted by Morison and is discussed at length in Theodore Roscoe, _United States Submarine Operations in World War II_, Annapolis: Naval Institute Press, 1949; Clay Blair, Jr., _Silent Victory: The U.S. Submarine War Against Japan_, Philadelphia and New York: J.B. Lippincott, 1975; and Edwyn Gray, _The Devil’s Device: Robert Whitehead and the History of the Torpedo_ (Revised Edition), Annapolis: Naval Institute Press, 1991. David E. Cohen has written a paper on the subject, “The Mk.XIV Torpedo: Lessons for Today”, Naval History, Vol.6, No.4, Winter 1992, pp.34-36.

5 Criticism of the destroyer-launched Mk.15 is almost nonexistent. This is strange, because the principal differences between the Mk.14 and the Mk.15 were the size of the warhead, the fuel load, three speeds vice two, and a slightly slower high speed, 45.0 knots vice 46.3. One might speculate that it is even more difficult to distinguish misses from duds in a high-speed destroyer attack than in a more measured submarine attack. The Mk.15 did, in fact, suffer from the same defects, and they were rectified in essentially the same way as those of the Mk.14. The Mk.13 was a slower torpedo, so it did not have the contact exploder problem, and it used the Mk.4 exploder, which did not have the magnetic influence feature.

6 Properly, the exploder is the entire Mk.6 assembly. It has an influence feature and a contact feature. This leads to awkward verbiage so we refer to the magnetic influence exploder and the contact exploder. Both are parts of the Exploder Mk.6, which weighs approximately 90 lbs, and some elements of the exploder function in both modes. The exploder also contains important safety features.

7 Some indication of the bewildering set of problems experienced by other navies can be found in Cdr. Richard Compton-Hall, RN (Ret.), _Submarines and the War at Sea, 1914-1918_, London: Macmillan, 1992; Karl Doenitz, _Memoirs: Ten Years and Twenty Days_, Annapolis: U.S. Naval Institute Press, 1990; and Cajus Bekker (pseudonym for H.D. Berenbrok), _Hitler’s Naval War_, Garden City, NY: Doubleday, 1974.

8 The Summary Technical Report of Division 6 of NDRC, _Torpedo Studies_ Vol.21, Washington: NDRC, 1946, p.15, contains the following revealing comment: “The principal result of the study of Depth-keeping is the development of a theory … there is no longer any excuse for the laborious production of depth mechanism that cannot be expected to operate at all.”

9 Roscoe p.253

10 More detail can be found in any of the references cited above. Blair discusses the situation on pp 275 ff. It is not clear whether or not the eleven-foot error included the error due to changing from exercise heads to warheads. It is, however, interesting that BuOrd/NTS-Newport criticized the Frenchman’s Bay experiments on the basis of “improper torpedo trim conditions” (Quoted in Blair p.276).

11 Roscoe p.253; Morison Vol.IV p.221 in particular; Blair pp.169-70, 198; John David Hoerl, _Torpedoes and the Gun Club_, unpublished MA Thesis, VPI and State University, 1991, pp. 9-15

12 Successful magnetic exploders have, of course, subsequently been developed by many organizations.

13 ComSubSoWesPac (Christie) issued the deactivation order in response to an order he had received from the new Commander, Seventh Fleet (Kincaid). Blair “Silent Victory”, p.504. Christie had been heavily involved in the development of the Mk.6 exploder at Newport and was reluctant to see it abandoned.

14 Two of the best documented patrols that suffered duds were Wahoo 5 (April 1943) and Tinosa 2 (July 1943). The first of these is reported in O’Kane “Wahoo” and the second in Shireman “The Sixteenth Torpedo” unpublished MA thesis, U. of Wisconsin, 1991.

15 The literature on the Mk.13, Mk.14 and Mk.15 torpedoes focuses strongly on the Mk.14 and says almost nothing about either the Mk.13 or the Mk.15. This is understandable in the case of the Mk.13 since it was a slower torpedo and consequently had a smaller depth error and no major problem with the contact exploder. In the case of the destroyer launched Mk.15, which was a few feet longer than the Mk.14 and carried a larger warhead, but otherwise nearly identical to the Mk.14, I have found no references to unequivocal torpedo failures. This may be because during a destroyer torpedo attack things are too hectic to permit a careful evaluation of torpedo performance.

16 James F. Calvert _Silent Running: My Years on a World War II Attack Submarine_, New York: John Wiley, 1995 pp.96-97.

17 Karl Doenitz, _Memoirs: Ten Years and Twenty Days_, Annapolis: U.S. Naval Institute Press, 1990. The bulk of the discussion of torpedo failures is contained in Chapter 7 and Appendix 3.

18 L.S.Howeth “History of Communications-Electronics in the United States Navy”, Washington: GPO, 1963 Chapter XXXVIII, and chronology pp. 540-41.

Strategic Defense Initiative [SDI] I

President Ronald Reagan’s announcement of the Strategic Defense Initiative on 23 March 1983 marked an even more explicit bid to use US technology to compete with the Soviet Union. As he put it:

Let us turn to the very strengths in technology that spawned our great industrial base and that have given us the quality of life we enjoy today.

What if free people could live secure in the knowledge that their security did not rest upon the threat of instant US retaliation to deter a Soviet attack, that we could intercept and destroy strategic ballistic missiles before they reached our own soil or that of our allies?

I call upon the scientific community in our country, those who gave us nuclear weapons, to turn their great talents now to the cause of mankind and world peace, to give us the means of rendering these nuclear weapons impotent and obsolete.

The US National Intelligence Council assessed that the Soviet Union would encounter difficulties in developing and deploying countermeasures to SDI (Strategic Defense Initiative). As one September 1983 memorandum put it,

They are likely to encounter technical and manufacturing problems in developing and deploying more advanced systems. If they attempted to deploy new advanced systems not presently planned, while continuing their overall planned force modernization, significant additional levels of spending would be required. This would place substantial additional pressures on the Soviet economy and confront the leadership with difficult policy choices.

SDI was announced in March 1983 by President Ronald Reagan as a plan for a system to defend against nuclear weapons delivered by ICBM (intercontinental ballistic missile). As planned, SDI would constitute an array of space-based vehicles that would destroy incoming missiles in the suborbital phase of attack.

The plan was controversial on three broad fronts. First, the Soviet Union, at the time the world’s other great nuclear superpower, saw SDI as a violation of the 1972 Treaty on the Limitation of Anti-Ballistic Missile Systems, concluded as part of SALT I, and therefore an upset to the balance of power. Second, proponents of the policy of mutually assured destruction (“MAD”), who saw that policy as the chief deterrent to nuclear war, criticized SDI as a means of making nuclear war appear a viable strategic alternative. Third, a great many scientists and others believed SDI was far too complex and expensive to work. These critics dubbed the “futuristic” program “Star Wars,” after the popular science fiction movie, and the label was widely adopted by the media.

Indeed, the technical problems involved in SDI were daunting. Multiple incoming missiles, which could be equipped with a variety of decoy devices, had to be detected and intercepted in space. Even those friendly to the project likened this to “shooting a bullet with a bullet.” Congress, unpersuaded, refused to grant funding for the full SDI program, although modified and spin-off programs consumed billions of dollars in development.

The `Star Wars’ programme or Strategic Defense Initiative (SDI), outlined by Reagan in a speech on 23 March 1983, was designed to enable the USA to dominate space, using space-mounted weapons to destroy Soviet satellites and missiles. It was not clear that the technology would work, in part because of the possible Soviet use of devices and techniques to confuse interceptor missiles. Indeed, Gorbachev was to support the Soviet army in claiming that the SDI could be countered. 12 However, the programme was also a product of the financial, technological and economic capabilities of the USA, and thus highlighted the contrast in each respect with the Soviet Union. The Soviets were not capable of matching the American effort, in part because they proved far less successful in developing electronics and computing and in applying them in challenging environments. Effective in heavy industry, although the many tanks produced had pretty crude driving mechanisms by Western standards, the Soviet Union failed to match such advances in electronics. Moreover, the shift in weaponry from traditional engineering to electronics, alongside the development of control systems dependent on the latter, saw a clear correlation between technology, industrial capacity, and military capability. It was in the 1980s that the Soviet Union fell behind notably. In 1986, an American interceptor rocket fired from Guam hit a mock missile warhead dead-on. This test encouraged the Soviets to negotiate.

The collapse of the Soviet Union beginning in 1989 seemed to many to render SDI a moot point, although others pointed out that a Russian arsenal still existed and that other nations had, or were developing, missiles of intercontinental range. There were, during the early 1990s, accusations and admissions that favorable results of some SDI tests had been faked, and former secretary of defense Caspar Weinberger asserted that while the SDI program had failed to produce practical weapons and had cost a fortune, its very existence forced the Soviet Union to spend itself into bankruptcy. In this sense, SDI might be seen as the most effective weapon of the cold war. In the administration of George W. Bush, beginning in 2001, SDI was revived, and the USAF resumed development and testing of components of the system.

Strategic Defense Initiative Organization (SDIO)

SDI’s formal beginnings date from NSDD 119, signed by President Reagan on January 6, 1984, which placed the program under DOD’s leadership. Key elements of this document, reflecting SDI’s raison d’être, include DOD managing the program, the SDI program manager reporting directly to the secretary of Defense, SDI placing primary emphasis on technologies involving nonnuclear components, and research continuing on nuclear-based strategic defense concepts as a hedge against a Soviet ABM breakout (Feycock 2006, 216).

On March 27, 1984, Secretary of Defense Caspar Weinberger (1917-2006) appointed Air Force lieutenant general James Abrahamson (1933-) as the first director of the Strategic Defense Initiative Organization (SDIO), which was given responsibility for developing SDI. Weinberger signed the SDIO charter on April 24, 1984, giving Abrahamson extensive freedom in managing the program (Federation of American Scientists n. d., 5).

A May 7, 1984 memorandum from Deputy Secretary of Defense William H. Taft IV (1945-) to the secretary of the Air Force provided additional direction and guidance on the mission and program management of SDI’s boost and space surveillance tracking systems. SDI attributes mandated in this document included the ability to provide ballistic missile tactical warning/attack assessment (TW/AA); satellite attack warning/verification (SAW/V); satellite targeting for U. S. ASAT operations; and SDI surveillance, acquisition, tracking and kill assessment (SATKA). Additional program mandates included program plans showing specific requirements, critical milestones, and costs along with alternative means of achieving these objectives (Spires 2004, 2:1130-1131).

SDIO was organized into five program areas covering SATKA, Directed Energy Weapons (DEW) Technology, Kinetic Energy Weapons (KEW) Technology, Systems Concept/Battle Management (SC/BM), and Survivability, Lethality, and Key Technologies (SLKT). SATKA program objectives included investigating sensing technologies capable of providing information to activate defense systems, conduct battle management, and assess force status before and during military engagements. A key SATKA challenge was developing the ability to discriminate among hostile warheads, decoys, and chaff during midcourse and early terminal phases of their trajectories (DiMaggio et al. 1986, 6-7).

The DEW program sought to examine the potential for using laser and/or particle beams for ballistic missile defense. Directed energy weapons can deliver destructive energy to targets at or near the speed of light and are particularly attractive for use against missiles as they rise through the atmosphere. Successfully engaging missiles during these flight stages allows them to be destroyed before they release multiple independently targeted warheads. Relevant weapon concepts studied under DEW included space-based lasers, ground-based lasers using orbiting relay mirrors, space-based neutral particle beams, and endoatmospheric charged particle beams guided by low-power lasers (DiMaggio et al. 1986, 7-8).

KEW program applications involved studying ways of accurately directing fairly light objects at high speed to intercept missiles or warheads during any flight phase. Technologies investigated by this program included space-based chemically launched projectiles with homing devices and space-based electromagnetic rail guns (DiMaggio et al. 1986, 8).

Research pertinent to SC/BM programs explores defensive architecture options allowing for deployment of extremely responsive, reliable, survivable, and cost-effective battle management and command, control, and communications systems. Factors examined in such programs must include mission objectives, offensive threat analyses, technical capabilities, risk, and cost (DiMaggio et al. 1986, 8-9).

SLKT program components seek to support research and technology development for improving system effectiveness and satisfying system logistical needs. Such survivability and lethality study efforts seek to produce information about expected enemy threats and the ability of SDI systems to survive efforts to destroy or defeat them. Relevant SLKT supporting technology research areas include space transportation and power, orbital maintenance, and energy storage and conversion. Pertinent SDI logistical research, under program auspices, is crucial for evaluating and reducing deployment and operational costs (DiMaggio et al. 1986, 10).

SDI achieved significant program and technical accomplishments over the next decade. A June 1984 Homing Overlay Experiment achieved the first kinetic kill intercept of an ICBM reentry vehicle, SDIO established an Exoatmospheric Reentry Vehicle Interceptor Subsystem (ERIS) Project Office in July 1984, and a High Endoatmospheric Defense Interceptor (HEDI) Project Office in October 1984. March 1985 saw Weinberger invite allied participation in U. S. ballistic missile defense programs, and in October 1985 National Security Advisor Robert McFarlane (1937-) introduced a controversial “broad interpretation” of the ABM Treaty, which asserted that certain space-based and mobile ABM systems and components such as lasers and particle beams could be developed and tested but not deployed (U. S. Army Space and Missile Defense Command, n. d. 2-3; U. S. Congress, Senate Committee on Armed Services, Subcommittee on Theater and Strategic Nuclear Forces 1986, 136-144).

During August 1986 the Army’s vice chief of staff approved the U. S. Army Strategic Defense Command theater missile defense research program, and the following month this official also directed the establishment of a Joint Theater Missile Defense Program Office in Huntsville, Alabama, to coordinate Army theater missile defense requirements. May 1987 saw the Flexible Lightweight Agile Guided Experiment achieve a successful kinetic energy intercept of a Lance missile, a high-velocity, low-altitude target. In July 1988 Hughes Aircraft delivered to the military the Airborne Surveillance Testbed Sensor, the most complex long-wavelength infrared sensor built to that time.

February 1989 saw President George H. W. Bush (1924-2018) announce that his administration would continue SDI developments; a June 1989 national defense strategy review concluded that SDI program goals were sound; SDIO approved an Endoatmospheric/Exoatmospheric Interceptor program during summer 1990 to succeed HEDI; the first successful ERIS intercept took place during January 1991; and in June 1991 there were successful tests of the lightweight exoatmospheric projectile integrated vehicle strap down and free flight hover (U. S. Army Space and Missile Defense Command n. d., 3-4; U. S. Department of Defense 1989, 1-31).

SDI was able to achieve significant accomplishments during the 1980s and early 1990s as the list above demonstrates. The program remained controversial during its first decade before SDIO was renamed the Ballistic Missile Defense Organization (BMDO) by the Clinton administration on June 14, 1994 (U. S. Department of Defense 1994, 1).

Program expenditures remained a source of controversy for some congressional appropriators. SDIO’s budget, according to a 1989 DOD report, was $3.8 billion for fiscal year 1989, representing about 1.3% of the $282.4 billion defense budget for that year (U. S. Department of Defense 1989, 27). A 1992 congressional review of SDIO expenditures found that the organization had received $25 billion since 1984 for ballistic missile defense system research and development and that the Bush administration’s proposed fiscal year 1992 budget estimated system acquisition costs at $46 billion (U. S. General Accounting Office 1992(a), 10).

Strategic Defense Initiative [SDI] II

Changing SDI program objectives complicated SDIO’s work and operational efficiency. SDI was originally intended to provide a massive system for defending the United States against Soviet ballistic missile attacks. During 1987 program objectives shifted from defending against massive missile strikes to deterring such strikes. The 1990 introduction of the Brilliant Pebbles space-based interceptor (see next entry) caused SDIO to change organizational direction again. A number of organizational realignments were implemented during September 1988 such as adding a chief of staff to oversee SDIO activities; adding a chief engineer to ensure multiple engineering tasks and analysis received top-level attention; and the creation of a Resource Management Directorate, which merged Comptroller and Support Services Directorates in an effort to enhance management efficiency (U. S. General Accounting Office 1992(a), 2; Federation of American Scientists n. d., 8).

Operation Desert Storm also heralded important changes in SDIO program activities. The use of Patriot missile batteries against Iraqi Scud missiles during this 1991 conflict achieved some success but with significant attending controversy over how successful the Patriot system had actually performed (U. S. Congress, House Committee on Government Operations, Subcommittee on Legislation and National Security 1993; Snodgrass 1993).

During his January 29, 1991 State of the Union address to Congress as this conflict raged, President Bush announced another SDI shift to the concept called Global Protection Against Limited Strikes (GPALS), as reflected in the following statement: “. . . I have directed that the Strategic Defense Initiative program be refocused on providing protection from limited ballistic missile strikes, whatever their source. Let us pursue an SDI program that can deal with any future threat to the United States, to our forces overseas and to our friends and allies” (Public Papers of the Presidents of the United States 1991, 78).

This shift to GPALS came about as a result of a perceived decline in the Soviet missile threat and the emergence of tactical ballistic missile threats from Iraq and other third world countries. GPALS would have two ground-based segments and one space-based segment. One of the ground-based components would consist of sensors and interceptors to protect U. S. and allied missile forces overseas from missile attack. The other ground-based segment would protect the United States from accidental or limited attacks of up to 200 warheads. GPALS’s space-based component would help detect and intercept missiles and warheads launched from anywhere in the world. SDIO sought to integrate these three segments to provide mutual coordinated support and required that each of these entities be designed to work together using automated data processing and communication networks (U. S. General Accounting Office 1992(a), 2-3).

This governmental emphasis on localized theater, as opposed to global strategic missile defense, was also reflected in the fiscal year 1991 congressional conference committee report on the defense budget issued October 24, 1990. This legislation called for the secretary of Defense to establish a centrally managed theater missile defense program funded at $218,249,000, required DOD to accelerate research and development on theater and tactical ballistic missile defense systems, and called for the inclusion of Air Force and Navy requirements in such a plan and the participation of these services (U. S. Congress, House Committee on Appropriations 1990, 117-118).

SDI and the concept of ballistic missile defense continued generating controversy throughout the program’s first decade. Although SDIO achieved relatively stable funding and enough operational successes to retain sufficient political support within DOD and in Congress to persevere as an organization, its mission focus never remained constant. Contentiousness over whether there was even a need for SDI or ballistic missile defense was reflected in the following 1991 statements before House Government Operations Committee oversight hearings on SDI.

Opponents of SDI such as the Federation of American Scientists’ Space Policy Project director John Pike claimed that ballistic missile threats to the United States were examples of hyperbolic rhetoric, that SDI was too expensive and beset by numerous technical problems, and that its deployment could jeopardize international arms control. Pike described SDI as a “Chicken Little” approach to existing threats, one that would cost more than $100 billion instead of current projections of $40 billion. He also contended that SDI had significant computing and software problems, that its deployment would end the ABM Treaty and imperil arms control progress, and that there was no compelling reason to deploy SDI based on the existing strategic environment (U. S. Congress, House Committee on Governmental Operations, Legislation and National Security Subcommittee 1992, 194).

Proponents of SDI such as Keith Payne of the National Institute for Public Policy emphasized how the Iraqi use of Scud missiles during Operation Desert Storm had drastically changed Cold War strategic assumptions about how ballistic missiles might be used in future military conflicts. These proponents stressed the threat to civilians from missiles that could carry chemical warheads, noting how normal life ended in cities threatened by Iraqi Scud attacks and how Iraqi conventionally armed missile attacks during the Iran-Iraq War caused nearly 2,000 deaths and forced the evacuation of urban areas like Tehran with ruinous economic consequences; they warned that such events could happen to U. S. and allied metropolitan areas due to ballistic missile proliferation (U. S. Congress, House Committee on Government Operations, Subcommittee on Legislation and National Security 1992, 284).

SDI supporters further stressed how the presence of ballistic missiles equipped with weapons of mass destruction in the hands of third world countries such as Iraq could drastically reduce the flexibility of U. S. leaders in responding to such threats. Examples of this reduced flexibility included U. S. leaders having to weigh the possibility of third party ballistic missile strikes against U. S. forces, allies, or U. S. population centers sufficient to limit the president’s freedom of action to respond; emerging ballistic missile threats that could have a debilitating effect on the U. S. capability to establish allied coalitions and respond to aggression as it did in Operation Desert Storm; and the prospect that activities such as escorting threatened commercial shipping through hostile waters during the Iran-Iraq War or militarily evacuating U. S. citizens from foreign hot spots could become increasingly dangerous. Payne and other missile defense supporters stressed that such defenses enable the United States to maintain the credibility of its overseas security commitments and encourage the belief that the United States will not be deterred from defending its national interests and allies (U. S. Congress, House Committee on Government Operations, Subcommittee on Legislation and National Security 1992, 284-285).

SDIO continued its activities as the Clinton administration began in 1993. Ballistic missile defense was not high on the national security priorities of this administration as it took office (Lindsay and O’Hanlon 2001, 87). SDIO’s initial institutional incarnation came to an end with DOD Directive 5134.9 on June 14, 1994, which established the BMDO as the organizational focal point for U. S. ballistic missile defense efforts. The now preponderant emphasis on developing defenses against theater ballistic missile threats, while also adhering to the ABM Treaty, was reflected in BMDO’s mission, whose characteristics included deploying an effective and rapidly mobile theater missile defense system to protect forward-deployed and expeditionary components of U. S. and allied armed forces; defending the U. S. homeland against limited ballistic missile attacks; demonstrating advanced technology options for enhanced missile defense systems including space-based defenses and associated sensors; making informed decisions on development, production, and deployment of such systems in consultation with U. S. allies; and adhering to existing international agreements and treaty obligations while using non-nuclear weapon technologies (U. S. Department of Defense 1994, 1-2). SDI may have been conceived and initially presented with idealistic fervor, but its inception was driven by profound and substantive dissatisfaction with the military and moral predicament of the United States being unable to defend its population and military interests against hostile ballistic missile attacks. SDI and its successor programs have survived and evolved into contemporary national missile defense programs because of their ability to pragmatically adapt to prevailing political, economic, and military environments facing the United States and its national security interests (Clagett 1996).

ASAT (antisatellite) weapons

Concern over a growing Soviet ASAT program caused the Reagan administration to begin efforts to remove congressional restrictions on testing ASAT capability in space. This concern resulted in the February 6, 1987 issuance of NSDD 258, in which DOD and the Air Force requested funding to conduct relevant research and development efforts in this area and which directed that further study of long-range U. S. ASAT requirements continue (U. S. National Security Council 2006(a), 255-256).

A final noteworthy Reagan administration space policy document was NSDD 293 on national space policy issued January 5, 1988. This document reaffirmed that the United States was committed to peacefully exploring and using outer space and that peaceful purposes allowed for military and intelligence-related activities in pursuit of national security and other goals, that the United States would pursue these military and intelligence activities to support its inherent self-defense rights and defense commitments to allies, that the United States rejected the claims of other nations to sovereignty over space or celestial bodies, that there can be limits on the fundamental right of sovereign nations to acquire data from space, and that the United States considers other national space systems to have the right to pass through and conduct space operations without interference (U. S. National Security Council 2006(b), 13-14).

This document went on to outline four basic DOD space mission areas including space support, force enhancement, space control, and force application. Space support guidelines stressed that military and intelligence space sectors could use manned and unmanned launch systems as determined by specific DOD or intelligence mission requirements. Force enhancement guidelines stressed that DOD would work with the intelligence community to develop, operate, and maintain space systems and develop appropriate plans and structures for meeting the operational requirements of land, sea, and air forces through all conflict levels. Space control guidelines stressed that DOD would develop, operate, and maintain enduring space systems to ensure freedom of action in space and deny such mobility to adversaries, that the United States would develop and deploy a comprehensive ASAT capability including both kinetic and directed energy weapons, and that DOD space programs would explore developing a space assets survivability enhancement program emphasizing long-term planning for future requirements. Where force application was concerned, this document proclaimed that DOD would, consistent with treaty requirements, conduct research, development, and planning to be prepared to acquire and deploy space weapons systems if national security conditions required them (U. S. National Security Council 2006(b), 15-16).

Projecting force from space was a particularly significant new facet of U. S. military space policy asserted in this document. This statement also reflected a belief in many governmental sectors that space was comparable to air, land, and sea war-fighting environments and that space combat operations should be pursued to defend national interests and enhance national security. NSDD 293 culminated a period of significant growth in U. S. military space policy during the Reagan presidency. This administration saw AFSPACOM established in 1982 to consolidate space activities and link space-related research and development with operational space users. Army and Navy space commands were also created during this time and USSPACECOM was established in 1985 as a unified multi-service space command. Additional Reagan administration developments in military space policy included establishing a Space Technology Center at New Mexico’s Kirtland Air Force Base; forming a DOD Space Operations Committee, elevating NORAD’s commander in chief to a four-star position and broadening that position’s space responsibilities; creating a separate Air Force Space Division and establishing a deputy commander for Space Operations; constructing a consolidated Space Operations Center; creating a Directorate for Space Operations in the Office of the Deputy Chief of Staff/Plans and Operations in Air Force headquarters; establishing SDIO; and establishing a space operations course at the Air Force Institute of Technology (Lambakis 2001, 229-230).

Broader Implications of the Strategic Defense Initiative

Though compelling for its parsimonious logic, mutually assured destruction came at a cost. By holding each other’s population hostage to nuclear annihilation, the superpowers reinforced preexisting ideological and geopolitical hostility. From this angle, the underlying logic of mutual vulnerability violated civilized norms and was tolerable only as a necessary evil in the absence of viable policy alternatives such as disarmament or strategic defense. This calculus changed after the Reagan administration took office.

Long before assuming the Oval Office, Ronald Reagan developed an interest in strategic defense, stimulated in part by visits to Lawrence Livermore Laboratory and the North American Air Defense Command. Reagan’s distaste for mutually assured destruction, combined with technological advances, convinced him to embark on a new strategic path, against the advice of some of his closest advisors. After deeming mutually assured destruction immoral in a nationally televised address on March 23, 1983, President Reagan challenged the scientific community “to give us the means of rendering these nuclear weapons impotent and obsolete.” Some critics immediately challenged the president on purely technical grounds, saying it would never work. Others deemed it provocative, believing the Soviets would conclude that the Strategic Defense Initiative (SDI) was a cover for the United States to achieve a first-strike capability.

Though clearly aspirational in the near term, SDI jarred the Soviets. So did Reagan’s later refusal to consider SDI a bargaining chip at Reykjavik, Iceland, talks in October 1986. After spending decades and hundreds of billions building their land-based ICBM force, Moscow found itself at the wrong end of a cost-imposition strategy. To be sure, the ensuing SDI debate included lots of talk about the potential for the Soviets to use cheap countermeasures to defeat an American missile defense system. Since Reagan’s initial concept of SDI did not specify technologies in advance, these scenarios reflected far more speculation than analysis. This much was clear: The United States changed the strategic debate to favor its own technological potential at the expense of the Soviet Union.

The Reagan administration’s newfound commitment to strategic defense exploited the Soviets’ relative disadvantage in microelectronics and computer technology. The Soviet leadership grasped the implications of this revolution even before Reagan announced his SDI initiative. In the past, the Soviets’ wide-ranging espionage efforts had offset some of the US technological advantages, as Soviet agents pilfered certain weapons designs, including, most spectacularly, that of the atomic weapon. But the microelectronics-based revolution was too broad and too deep for the Soviets to steal their way to technological equivalency. Equally problematic, the Soviets’ centralized economic system lacked the ability to create disruptive technologies of its own, let alone match those of the United States. In this case, as it usually does, entrepreneurship handily beat centralized planning in generating technological innovation.

1812 – Russia’s War Machine I

Apart from the Romanovs, the greatest beneficiaries of eighteenth-century Russia’s growing wealth were the small group of families who dominated court, government and army in this era and formed the empire’s aristocratic elite. Some of these families were older than the Romanovs, others were of much more recent origin, but by Alexander I’s reign they formed a single aristocratic elite, united by wealth and a web of marriages. Their riches, social status and positions in government gave them great power. Their patron–client networks stretched throughout Russia’s government and armed forces. The Romanovs themselves came from this aristocratic milieu. Their imperial status had subsequently raised them far above mere aristocrats, and the monarchs were determined to preserve their autonomy and never allow themselves to be captured by any aristocratic clique. Nevertheless, like other European monarchs they regarded these aristocratic magnates as their natural allies and companions, as bulwarks of the natural order and hierarchy of a well-run society.

The aristocracy used a number of crafty ways to preserve their power. In the eighteenth century they enlisted their sons in Guards regiments in childhood. By the time they reached their twenties, these sprigs of the aristocracy used their years of ‘seniority’ and the privileged status of the Guards to jump into colonelcies in line regiments. Catherine the Great’s son, Paul I, who reigned from 1796 to 1801, stopped this trick but very many of the aristocrats in senior posts in 1812–14 had benefited from it. Even more significant was the use made by the aristocracy of positions at court. Though mostly honorific, these positions allowed young gentlemen of the bedchamber (Kammerjunker) and lords in waiting (Kammerherr) to transfer into senior positions in government of supposedly equivalent rank.

In the context of eighteenth-century Europe there was nothing particularly surprising about this. Young British aristocrats bought their way rapidly up the military hierarchy, sat in Parliament for their fathers’ pocket boroughs and sometimes inherited peerages at a tender age. Unlike the English, Russian aristocrats did not control government through their domination of Parliament. A monarch who bungled policy or annoyed the Petersburg elite too deeply could be overthrown and murdered, however. Paul I once remarked that there were no Grands Seigneurs in Russia save men who were talking to the emperor and even their status lasted only as long as the emperor deigned to continue the conversation. He was half correct: Russian magnates were more subservient and less autonomous than their equivalents in London or Vienna. But he was also half wrong and paid for his miscalculation with his life in 1801, when he was murdered by members of the aristocracy, outraged by his arbitrary behaviour, led by the governor-general of Petersburg, Count Peter von der Pahlen.

The Russian aristocracy and gentry made up the core of the empire’s ruling elite and officer corps. But the Romanovs ruled over a multi-ethnic empire. They allied themselves to their empire’s non-Russian aristocracies and drew them into their court and service. The most successful non-Russian aristocrats were the German landowning class in the Baltic provinces. By one conservative estimate 7 per cent of all Russian generals in 1812 were Baltic German nobles. The Balts partly owed their success to the fact that, thanks to the Lutheran Church and the eighteenth-century Enlightenment in northern Europe, they were much better educated than the average Russian provincial noble.

There was nothing unusual at the time in an empire being ruled by diverse and alien elites. In its heyday, the Ottoman ruling elite was made up of converted Christian slaves. The Qing and Mughal empires were run by elites who came from beyond the borders of China or the subcontinent. By these standards, the empire of the Romanovs was very Russian. Even by European standards the Russian state was not unique. Very many of the Austrian Empire’s leading soldiers and statesmen came from outside the Habsburgs’ own territories. None of Prussia’s three greatest heroes in 1812–14 – Blücher, Scharnhorst or Gneisenau – was born a Prussian subject or began his career in the Prussian army.

It is true that there were probably more outsiders in the Russian army than in Austria or Prussia. European immigrants also stood out more sharply in Petersburg than in Berlin or Vienna. In the eighteenth century many European soldiers and officials had entered Russian service in search of better pay and career prospects. In Alexander’s reign they were joined by refugees fleeing the French Revolution or Napoleon. Above all, European immigrants filled the gap created by the slow development of professional education or a professional middle class in Russia. Doctors were one such group. Even in 1812 there were barely 800 doctors in the Russian army, many of them of German origin. Military engineers were also in short supply. In the eighteenth century Russian engineers had been the younger brothers of the artillery and came under its jurisdiction. Though they gained their independence under Alexander, there were still too few trained engineer officers trying to fulfil too diverse a range of duties and Russia remained in search of foreign experts whom it might lure into its service. On the eve of 1812 the two most senior Russian military engineers were the Dutchman Peter van Suchtelen and the German Karl Oppermann.

An even more important nest of foreigners was the quartermaster-general’s department, which provided the army’s general staff officers. Almost one in five of the ‘Russian’ staff officers at the battle of Borodino were not even subjects of the tsar. Fewer than half had Slav surnames. The general staff was partly descended from the bureau of cartography, a very specialized department which required a high level of mathematical skill. This ensured that it would be packed with foreigners and non-Russians. As armies grew in size and complexity in the Napoleonic era, the role of staffs became crucial. This made it all the more galling for many Russians that so large a proportion of their staff officers had non-Russian names. In addition, Napoleon’s invasion in 1812 set off a wave of xenophobia in Russia, which sometimes targeted ‘foreigners’ in the Russian army, without making much distinction between genuine foreigners and subjects of the tsar who were not ethnic Russians. Without its non-Russian staff officers the empire could never have triumphed in 1812–14, however. Moreover, most of these men were totally loyal to the Russian state, and their families usually in time assimilated into Russian society. These foreign engineers and staff officers also helped to train new generations of young Russian officers to take their places.

For the tsarist state, as for all the other great powers, the great challenge of the Napoleonic era was to mobilize resources for war. There were four key elements to what one might describe as the sinews of Russian power. They were people, horses, military industry and finance. Unless the basic strengths and limitations of each of these four elements are grasped it is not possible to understand how Russia fought these wars or why she won them.

Manpower was any state’s most obvious resource. At the death of Catherine II in 1796 the population of the Russian empire was roughly 40 million. This compared with 29 million French subjects on the eve of the Revolution and perhaps 22 million inhabitants of the Habsburgs’ lands at that time. The Prussian population was only 10.7 million even in 1806. The United Kingdom stood somewhere between Prussia and the larger continental powers. Its population, including the Irish, was roughly 15 million in 1815, though Indian manpower was just becoming a factor in British global might. By European standards, therefore, the Russian population was large but it was not yet vastly greater than that of its Old Regime rivals and it was much smaller than the human resources controlled by Napoleon. In 1812 the French Empire, in other words all territories directly ruled from Paris, had a population of 43.7 million. But Napoleon was also King of Italy, which had a population of 6.5 million, and Protector of the 14 million inhabitants of the Confederation of the Rhine. Some other territories were also his to command: most notably from the Russian perspective the Duchy of Warsaw, whose population of 3.8 million made a disproportionate contribution to his war effort in 1812–14. A mere listing of these numbers says something about the challenge faced by Russia in these years.

From the state’s perspective the great point about mobilizing the Russian population was that it was not merely numerous but also cheap. A private in Wellington’s army scarcely lived the life of a prince but his annual pay was eleven times that of his Russian equivalent even if the latter was paid in silver kopeks. In reality the Russian private in 1812 was far more likely to be paid in depreciating paper currency worth one-quarter of its face value. Comparisons of prices and incomes are always problematic because it is often unclear whether the Russian rubles cited are silver or paper, and in any case the cost of living differed greatly between Russia and foreign countries, above all Britain. A more realistic comparison is the fact that even in peacetime a British soldier received not just bread but also rice, meat, peas and cheese. A Russian private was given nothing but flour and groats, though in wartime these were supplemented by meat and vodka. The soldiers boiled their groats into a porridge which was their staple diet.

A Russian regiment was also sometimes provided not with uniforms and boots but with cloth and leather from which it made its own clothing and footwear. Powder, lead and paper were also delivered to the regiments for them to turn into cartridges. Nor was it just soldiers whose labour was used for free by the state. A small minority of conscripts were sent not to the army but to the mines. More importantly, when Peter the Great first established the ironworks which were the basis of Russian military industry he assigned whole villages to work in them in perpetuity. He did the same with some of the cloth factories set up to clothe his army. This assigned labour was all the cheaper because the workers’ families retained their farms, from which they were expected to feed themselves.

So long as all European armies were made up of long-serving professionals the Russian military system competed excellently. The system of annual recruit levies allowed the Russian army to be the largest and cheapest in Europe without putting unbearable pressure on the population. Between 1793 and 1815, however, changes began to occur, first in France and later in Prussia, which put a question mark against its long-term viability. Revolutionary France began to conscript whole ‘classes’ of young men in the expectation that once the war was over they would return to civilian life as citizens of the new republic. In 1798 this system was made permanent by the so-called Loi Jourdain, which established a norm of six years’ service. A state which conscripted an entire age group for a limited period could put more men in the ranks than Russia. In time it would also have a trained reserve of still relatively young men who had completed their military service. If Russia tried to copy this system its army would cease to be a separate estate of the realm and the whole nature of the tsarist state and society would have to change. A citizen army was barely compatible with a society based on serfdom. The army would become less reliable as a force to suppress internal rebellion. Noble landowners would face the prospect of a horde of young men returning to the countryside who (if existing laws remained) were no longer serfs and who had been trained in arms.

In fact the Napoleonic challenge came and went too quickly for the full implications of this threat to materialize. Temporary expedients sufficed to overcome the emergency. In 1807 and again in 1812–14 the regime raised a large hostilities-only militia despite the fears of some of its own leaders that this would be useless in military terms and might turn into a dangerous threat to the social order. When the idea of a militia was first mooted in the winter of 1806–7, Prince I. V. Lopukhin, one of Alexander’s most senior advisers, warned him that ‘at present in Russia the weakening of ties of subordination to the landowners is more dangerous than foreign invasion’. The emperor was willing to take this risk and his judgement proved correct. The mobilization of Russian manpower through a big increase in the regular army and the summoning of the militia just sufficed to defeat Napoleon without requiring fundamental changes in the Russian political order.

Next only to men as a military resource came horses, with which Russia was better endowed than any other country on earth. Immense herds dwelt in the steppe lands of southern Russia and Siberia. These horses were strong, swift and exceptionally resilient. They were also very cheap. One historian of the Russian horse industry calls these steppe horses ‘a huge and inexhaustible reserve’. The closest the Russian cavalry came to pure steppe horses was in its Cossack, Bashkir and Kalmyk irregular regiments. The Don Cossack horse was ugly, small, fast and very easy to manoeuvre. It could travel great distances in atrocious weather and across difficult terrain for days on end and with minimal forage in a way that was impossible for regular cavalry. At home the Cossack horse was always out to grass. In winter it would dig out a little trench with its front hoofs to expose roots and grasses hidden under the ice and snow. Cossacks provided their own horses when they joined the army, though in 1812–14 the government did subsidize them for animals lost on campaign. Superb as scouts and capable of finding their way across any terrain even in the dark, the Cossacks also spared the Russian regular light cavalry many of the duties which exhausted their equivalents in other armies: but the Russian hussar, lancer and mounted jaeger regiments also themselves had strong, resilient, cheap and speedy horses with a healthy admixture of steppe blood.

Traditionally the medium (dragoon) and heavy (cuirassier) horses had been a much bigger problem. In fact on the eve of the Seven Years War Russia had possessed no viable cuirassier regiments and even her dragoons had been in very poor shape. By 1812, however, much had changed, above all because of the huge expansion of the Russian horse-studs industry in the last decades of the eighteenth century. Two hundred and fifty private studs existed by 1800, almost all of which had been created in the last forty years. They provided some of the dragoon and most of the cuirassier horses. British officers who served alongside the Russians in 1812–14 agreed that the heavy cavalry was, in the words of Sir Charles Stewart, ‘undoubtedly very fine’. Sir Robert Wilson wrote that the Russian heavy cavalry ‘horses are matchless for an union of size, strength, activity and hardiness; whilst formed with the bulk of the British cart-horse, they have so much blood as never to be coarse, and withal are so supple as naturally to adapt themselves to the manege, and receive the highest degree of dressing’.

If there was a problem with the Russian cuirassier horse it was perhaps that it was too precious, at least in the eyes of Alexander I. Even officially these heavy cavalry horses cost two and a half times as much as a hussar’s mount, and the horses of the Guards cuirassiers – in other words the Chevaliers Gardes and Horse Guard regiments – cost a great deal more. Their feeding and upkeep were more expensive than that of the light cavalry horses and, as usual with larger mounts, they had less endurance and toughness. Since they came from studs they were also much harder to replace. Perhaps for these reasons, in 1813–14 the Russian cuirassiers were often kept in reserve and saw limited action. Alexander was furious when on one occasion an Austrian general used them for outpost duty and allowed them to sustain unnecessary casualties.

Russian military industry could usually rely on domestic sources for its raw materials, with some key exceptions. Much saltpetre needed to be imported from overseas, and so too did lead; this dependence became an expensive and dangerous weakness in 1807–12, when the Continental System hamstrung Russian overseas trade. Wool for the army’s uniforms was also a problem, because Russia only produced four-fifths of the required amount. There were also not enough wool factories to meet military demand as the army expanded after 1807. The truly crucial raw materials were iron, copper and wood, however, and these Russia had in abundance. At the beginning of Alexander’s reign Russia was still the world’s leading iron producer and stood second only to Britain in copper. Peter the Great had established the first major Russian ironworks to exploit the enormous resources of iron ore and timber in the Urals region, on the borders of Europe and Siberia. Though Russian metallurgical technology was beginning to fall well behind Britain, it was still more than adequate to cover military needs in 1807–14. The Ural region was far from the main arms-manufacturing centres in Petersburg and in the city of Tula, 194 kilometres south of Moscow, but efficient waterways linked the three areas. Nevertheless, any arms or ammunition produced in the Urals works would not reach armies deployed in Russia’s western borderlands for over a year.

Arms production fell into two main categories: artillery and handheld weapons. The great majority of Russian iron cannon were manufactured in the Alexander Artillery Works in Petrozavodsk, a small town in Olonets province north-east of Petersburg. They were above all designed for fortresses and for the siege train. Most of the field artillery came from the St Petersburg arsenal: it produced 1,255 new guns between 1803 and 1818. The technology of production was up to date in both works. In the Petersburg Arsenal a steam-powered generator was introduced in 1811 which drove all its lathes and its drilling machinery. A smaller number of guns were produced and repaired in the big depots and workshops in Briansk, a city near the border of Russia and Belorussia. Russian guns and carriages were up to the best international standards once Aleksei Arakcheev’s reforms of the artillery were completed by 1805. The number of types of gun was reduced, equipment was standardized and lightened, and careful thought went into matching weapons and equipment to the tactical tasks they were intended to fulfil. The only possible weakness was the Russian howitzers, which could not be elevated to the same degree as the French model and therefore could not always reach their targets when engaged in duels with their French counterparts. On the other hand, thanks to the lightness of their carriages and the quality of their horses the Russian horse artillery was the most mobile and flexible on the battlefield by 1812–14.

The situation as regards handheld firearms was much less satisfactory. Muskets were produced in three places. The Izhevsk works in Viatka province near the Urals turned out roughly 10 per cent of all firearms manufactured in 1812–14; many fewer were produced at the Sestroretsk works 35 kilometres from Petersburg, though Sestroretsk did play a bigger role in repairing existing weapons. The city of Tula was therefore by far the most important source of muskets in 1812–14.

The Tula state arms factory had been founded by Peter the Great in 1712 but production was shared between it and private workshops. In 1812, though the state factory produced most of the new muskets, six private entrepreneurs also supplied a great many. These entrepreneurs did not themselves own factories, however. They met state orders partly from their own rather small workshops but mostly by subcontracting the orders to a large number of master craftsmen and artisans who worked from their own homes. The war ministry complained that this wasted time, transport and fuel. The state factory was itself mostly just a collection of smallish workshops with production often by hand. The labour force was divided into five crafts: each craft was responsible for one aspect of production (gun barrels, wooden stocks, firing mechanisms, cold steel weapons, all other musket parts). Producing the barrels was the most complicated part of the operation and caused most of the delays, partly because skilled labour was in short supply.

The biggest problem both in the factory and the private workshops was out-of-date technology and inadequate machine tools. Steam-powered machinery was only introduced at the very end of the Napoleonic Wars and in any case proved a failure, in part because it required wood for fuel, which was extremely expensive in the Tula region. Water provided the traditional source of power and much more efficient machinery was introduced in 1813 which greatly reduced the consumption of water and allowed power-based production to continue right through the week. Even after the arrival of this machinery, however, shortage of water meant that all power ceased for a few weeks in the spring. In 1813, too, power-driven drills for boring the musket barrels were introduced: previously this whole job had been done by hand by 500 men, which was a serious brake on production. A Russian observer who had visited equivalent workshops in England noted that every stage in production there had its own appropriate machine tools. In Tula, on the contrary, many specialist tools, especially hammers and drills, were not available: in particular, it was almost impossible to acquire good steel machine tools. Russian craftsmen were sometimes left with little more than planes and chisels.

Given the problems it faced, the Russian arms industry performed miracles in the Napoleonic era. Despite the enormous expansion of the armed forces in these years and heavy loss of weapons in 1812–14, the great majority of Russian soldiers did receive firearms and most of them were made in Tula. These muskets cost one-quarter of their English equivalents. On the other hand, without the 101,000 muskets imported from Britain in 1812–13 it would have been impossible to arm the reserve units which reinforced the field army in 1813. Moreover, the problems of Russian machine tools and the tremendous pressures for speed and quantity made it inevitable that some of these muskets would be sub-standard. One British source was very critical of the quality of Tula muskets in 1808, for example. On the other hand, a French test of muskets’ firing mechanisms concluded that the Russian models were somewhat more reliable than their own, though much less so than the British and Austrian ones. The basic point was that all European muskets of this era were thoroughly unreliable and imperfect weapons. The Russian ones were undoubtedly worse than the British, and probably often worse than those of the other major armies too. Moreover, despite heroic levels of production in 1812–14 the Russian arms industry could never supply enough new-model muskets to ensure that all soldiers in a battalion had one type and calibre of firearm, though once again Russia’s was an extreme example of a problem common to all the continental armies.

Perhaps the quality of their firearms did exert some influence on Russian tactics. It would have been an optimistic Russian general who believed that men armed with these weapons could emulate Wellington’s infantry by deploying in two ranks and repelling advancing columns by their musketry. The shortcomings of the Russian musket were possibly an additional reason for the infantry to fight in dense formations supported by the largest ratio of artillery to foot-soldiers of any European army. However, although the deficiencies of the Russian musket may perhaps have influenced the way the army fought, they certainly did not undermine its viability on the battlefield. The Napoleonic era was still a far cry from the Crimean War, by which time the Industrial Revolution was beginning to transform armaments and the superiority of British and French rifled muskets over Russian smoothbores made life impossible for the Russian infantry.

The fourth and final element in Russian power was fiscal, in other words revenue. Being a great power in eighteenth-century Europe was very expensive and the costs escalated with every war. Military expenditure could cause not just fiscal but also political crisis within a state. The most famous example of this was the collapse of the Bourbon regime in France in 1789, brought on by bankruptcy as a result of the costs of intervention in the American War of Independence. Financial crisis also undermined other great powers. In the midst of the Seven Years War, for example, it forced the Habsburgs substantially to reduce the size of their army.

The impact of finance on diplomatic and military policy continued in the Napoleonic era. In 1805–6 Prussian policy was undermined by the lack of funds needed to keep the army mobilized as a standing threat to Napoleon. Similarly, in 1809 Austria was faced with the choice of either fighting Napoleon immediately or reducing the size of its army, since the state could not afford the current level of military expenditure. The Austrians chose to fight, were defeated, and were then lumbered with a war indemnity which crippled their military potential for years to come. An even more crushing indemnity was imposed on Prussia in 1807. In 1789 Russia had a higher level of debt than Austria or Prussia. Inevitably the wars of 1798–1814 greatly increased that debt. Unlike the Austrians or Prussians, in 1807 Russia did not have to pay an indemnity after being defeated by Napoleon. Had it lost in 1812, however, the story would have been very different.

Even without the burdens of a war indemnity Russia suffered financial crisis in 1807–14. Ever since Catherine II’s first war with the Ottomans (1768–74) expenditure had regularly exceeded revenue. The state initially covered the deficit in part by borrowing from Dutch bankers. By the end of the eighteenth century this was no longer possible: interest payments had become a serious burden on the treasury. In any case the Netherlands had been overrun by France and its financial markets were closed to foreign powers. Even before 1800 most of the deficit had been covered by printing paper rubles. By 1796 the paper ruble was worth only two-thirds of its silver equivalent. Constant war after 1805 caused expenditure to rocket. The only way to cover the cost was by printing more and more paper rubles. By 1812 the paper currency was worth roughly one-quarter of its ‘real’ (i.e. silver) value. Inflation caused a sharp rise in state expenditure, not least as regards military arms, equipment and victuals. To increase revenue rapidly enough to match costs was impossible. Meanwhile the finance ministry lived in constant dread of runaway inflation and the complete collapse in trust in the paper currency. Even without this, dependence on depreciating paper currency had serious risks for the Russian army’s ability to operate abroad. Some food and equipment had to be purchased in the theatre of operations, above all when operating on the territory of one’s allies, but no foreigner would willingly accept paper rubles in return for goods and services.

At the death of Catherine II in 1796 Russian annual revenue amounted to 73 million rubles or £11.7 million; if collection costs are included this sinks to £8.93 million, or indeed lower if the depreciating value of the paper ruble is taken into account. Austrian and Prussian revenues were of a similar order: in 1800, for example, Prussian gross revenue was £8.65 million, while in 1788 Austrian gross revenue had been £8.75 million. Even in 1789, with her finances in deep crisis, French royal revenue at 475 million francs or £19 million was much higher. Britain was in another league again: the new taxes introduced in 1797–9 raised her annual revenue from £23 million to £35 million.

If Russia nevertheless remained a formidable great power, that was because crude comparisons of revenue across Europe have many flaws. In addition, as we have seen in this chapter, all key military resources were far cheaper in Russia than, for example, in Britain. Even in peacetime, the state barely paid at all for some services and goods. It even succeeded in palming off on the peasantry part of the cost of feeding most of the army, which was quartered in the villages for most of the year. In 1812 this principle was taken to an extreme, with massive requisitioning and even greater voluntary contributions. One vital reason why Russia had been victorious at limited cost in the eighteenth century was that it had fought almost all its wars on enemy territory and, to a considerable extent, at foreign expense. This happened again in 1813–14.

In 1812–14 the Russian Empire defeated Napoleon by a narrow margin and by straining to breaking point almost every sinew of its power. Even so, on its own Russia could never have destroyed Napoleon’s empire. For this a European grand alliance was needed. Creating, sustaining and to some extent leading this grand alliance was Alexander I’s greatest achievement. Many obstacles lay in Alexander’s path. To understand why this was the case and how these difficulties were overcome requires some knowledge of how international relations worked in this era.

Alexander I understood the power of regimental solidarity and tried to preserve it by ensuring that as far as possible officers remained within a single regiment until they reached senior rank. Sometimes this was a losing battle since officers could have strong personal motivation for transfer. Relatives liked to serve together. A more senior brother or an uncle in the regiment could provide important patronage. Especially in wartime, the good of the service sometimes required transferring officers to fill vacancies in other regiments. So too did the great expansion of the army in Alexander’s reign. Seventeen new regiments were founded between 1801 and 1807 alone: experienced officers needed to be found for them. In these circumstances it is surprising that more than half of all officers between the rank of ensign and captain had served in only one regiment, as had a great many majors. Particularly in older regiments such as the Grenadiers, the Briansk or Kursk infantry regiments, or the Pskov Dragoons the number of officers up to the rank of major who had spent their whole lives in the regiments was extremely high. As one might expect, the Preobrazhensky Guards, the senior regiment in the Russian army, was the extreme case, with almost all the officers spending their whole careers in the regiment. Add to this the fact that the overwhelming majority of Russian officers were bachelors and the strength of their commitment to their regiments becomes evident.

Nevertheless, the greatest bearers of regimental loyalty and tradition were the non-commissioned officers. In the regiments newly formed in Alexander’s reign, the senior NCOs arrived when the regiment was created and served in it for the rest of their careers. Old regiments would have a strong cadre of NCOs who had served in the unit for twenty years or more. In a handful of extreme cases such as the Briansk Infantry and Narva Dragoons every single sergeant-major, sergeant and corporal had spent his entire military life in the regiment. In the Russian army there was usually a clear distinction between the sergeant-majors (fel’dfebeli in the infantry and vakhmistry in the cavalry) on the one hand, and the ten times more numerous sergeants and corporals (unterofitsery) on the other. The sergeants and corporals were mostly peasants. They gained their NCO status as veterans who had shown themselves to be reliable, sober and skilled in peacetime, and courageous on the battlefield. Like the conscript body as a whole, the great majority of them were illiterate.

The sergeant-majors, on the other hand, were in the great majority of cases literate, though particularly in wartime some illiterate sergeants who had shown courage and leadership might be promoted to sergeant-major. Many were the sons of priests, but above all of the deacons and other junior clergy who were required to assist at Orthodox services. Most sons of the clergy were literate and the church could never find employment for all of them. They filled a key gap in the army as NCOs. But the biggest source of sergeant-majors was soldiers’ sons, who were counted as hereditary members of the military estate. The state set up compulsory special schools for these boys: almost 17,000 boys were attending these schools in 1800. In 1805 alone 1,893 soldiers’ sons entered the army. The education provided by the schools was rudimentary and the discipline was brutal, but they did train many drummers and other musicians for the army, as well as some regimental clerks. Above all, however, they produced literate NCOs, imbued with military discipline and values from an early age. As befitted the senior NCO of the Russian army’s senior regiment, the regimental sergeant-major of the Preobrazhenskys in 1807, Fedor Karneev, was the model professional soldier: a soldier’s son with twenty-four years’ service in the regiment, an unblemished record, and a military cross for courage in action.

Although the fundamental elements of the Russian army were immensely strong, there were important weaknesses in its tactics and training in 1805. With the exception of its light cavalry, this made it on the whole inferior to the French. The main reason for this was that the French army had been in almost constant combat with the forces of other great powers between 1792 and 1805. With the exception of the Italian and Swiss campaigns of 1799–1800, in which only a relatively small minority of regiments participated, the Russian army lacked any comparable wartime experience. In its absence, parade-ground values dominated training, reaching absurd levels of pedantry and obsession at times. Partly as a result, Russian musketry was inferior to French, as was the troops’ skill at skirmishing. The Russians’ use of massed bayonet attacks to drive off skirmishers was costly and ineffective. In 1805–6 Russian artillery batteries were often poorly shielded against the fire of enemy skirmishers.

The army’s worst problems revolved around coordination above the level of the regiment. In 1805 there were no permanent units of more than regimental size. At Austerlitz, Russian and Austrian columns put together at the last moment manoeuvred far less effectively than the permanent French divisions. In 1806 the Russians created their own divisions but coordination on the battlefield remained a weakness. The Russian cavalry would have been hard pressed to emulate Murat’s massed charge at Eylau. The Russian artillery certainly could not have matched the impressive concentration and mobility of Senarmont’s batteries at Friedland.

Most important, however, were weaknesses in the army’s high command, meaning the senior generals and, above all, the supreme commanders. At this level the Russians were bound to be inferior to the French. No one could match a monarch who was also a military genius. Although the Russian military performance was hampered by rivalry among its generals, French marshals cooperated no better in Napoleon’s absence. When Alexander seized effective command from Kutuzov before Austerlitz the result was a disaster. Thoroughly chastened, Alexander kept away from the battlefield in 1806–7. This solved one problem but created another. In the absence of the monarch the top leader needed to be a figure who could command obedience both by his reputation and by being unequivocally senior to all the other generals. By late 1806, however, all the great leaders of Catherine’s wars were dead. Mikhail Kutuzov was the best of the remaining bunch but he had been out of favour since Austerlitz. Alexander therefore appointed Field-Marshal Mikhail Kamensky to command the army on the grounds of his seniority, experience and relatively good military record. When he reached the army Kamensky’s confused and even senile behaviour quickly horrified his subordinates. As one young general, Count Johann von Lieven, asked on the eve of the first serious battles with the French: ‘Is this lunatic to command us against Napoleon?’