Cold War Redux I

By MSW


The United States and the Soviet Union probably came closer to nuclear war in the early 1980s than they had at any time since the Cuban missile crisis. The last decade of the Cold War—sometimes called the Second Cold War—witnessed plenty of tough talk, with defense budgets to match. As they had throughout the postwar period, both nations invested heavily in pie-in-the-sky technologies as the key to national defense. Once again, scientists and engineers at federal, private, and university laboratories turned their attention to developing advanced nuclear weapons systems. And as before, large parts of this research took place behind closed doors, inaccessible to all but those with the highest security clearances. The military-industrial complex that Eisenhower had warned about was back, and with a vengeance.

Reagan’s Cold War, however, was not Eisenhower’s. American political leaders in the 1980s embraced a free market ideology that placed a priority on limiting the size of the federal government. Right-wing ideologues questioned the government’s role (aside from the essential functions of defense) in American life. It was no longer obvious to them, or to their allies in Congress, that the federal government should provide lavish funding for basic research in science and technology. In an era of globalization, policymakers encouraged the development of scientific fields that could compete in the global marketplace as well as in the marketplace of ideas.

These twin concerns—national defense and privatization—shaped the course of American policy for science and technology in the waning days of the Cold War.

Dual Threats

The 1970s and 1980s were a period of tremendous instability for international politics and the global economy. Within the United States, the ascendant conservative political movement called for a renewed commitment to anti-Communism and, with it, a reduction in the size and function of the federal government. At the same time, economic growth in Japan, Germany, and an increasingly integrated Western Europe presented new challenges to American dominance of global markets. Visions of science and technology were once again front and center in shaping the American response to the dual threats of Communism and economic competitiveness.

The 1970s started with a thaw in the superpowers’ relationship, a process that the Americans referred to as détente. In February 1972, President Richard Nixon went to China, where he met with Chairman Mao; three months later, Nixon signed the Anti-Ballistic Missile (ABM) Treaty with Soviet leader Leonid Brezhnev. The “Basic Principles” endorsed by both leaders stated that each nation would avoid “efforts to obtain unilateral advantage at the expense of the other,” meaning that neither would attempt to shift the global balance of power. Détente was, in effect, stalemate. Like most compromises, détente satisfied no one. Advocates of disarmament decried the missed opportunity to reduce nuclear arsenals; hawks viewed any agreement with the Soviet Union as capitulation.

By the time newly elected President Jimmy Carter turned in 1977 to implementing his campaign promise to reduce, or possibly even eliminate, nuclear weapons, détente had already begun to fall victim to volatile politics in the Middle East and Africa. The United States’ involvement in the 1973 Arab-Israeli War and the Soviet Union’s participation in a number of African civil wars, most importantly in Angola and Ethiopia, suggested that neither nation was willing to abandon its ambition to dominate global politics. The Soviet Union’s invasion of Afghanistan in December 1979 announced the formal end of détente to any observers who might have missed its more gradual decline.

The renewal of Cold War tensions was accompanied by dramatic increases in defense spending. Under President Ronald Reagan, a conservative who described the Soviet Union as the “evil empire,” defense spending as a fraction of GDP rose by nearly a third, from a postwar low of 4.7 percent in 1978 to 6.2 percent in 1987. This increase mirrored a shift in the source of federal R&D monies. In 1979, the DOD supplied less than half (44 percent) of the federal investment in science and technology; by 1987, at the peak of the Reagan-era defense buildup, the DOD was once again bankrolling nearly two-thirds (63 percent) of scientific R&D. This number is even more startling in the longer context of the Cold War: the proportion of federal R&D coming from defense in 1987 was higher than it had been at any point since 1962. The Great Society’s promise to use federally funded scientific research to conquer disease, pollution, and poverty had gone unfulfilled.

As a consequence of both the Mansfield Amendment and Project Hindsight, DOD dollars were increasingly concentrated at industrial contractors and nonprofit research centers held at arm’s length from universities. For example, the Charles Stark Draper Laboratory, Inc., the former MIT Instrumentation Laboratory now operating as an independent entity, consistently ranked first among nonprofit federal R&D contractors throughout the 1980s. Meanwhile, fewer American corporations depended on federal contracts for the bulk of their profits. Having suffered from their reliance on single-source funders during the defense cuts of the late 1960s and early 1970s, such leading defense contractors as Westinghouse, GE, Honeywell, and Raytheon either diversified their products and services or left the defense field altogether. The uptick in defense spending certainly strengthened the remaining contractors’ bottom line, but these contracts represented a much smaller part of a larger portfolio. Throughout the 1980s, for example, only five of the top fifteen defense contractors obtained more than 60 percent of their revenue from contracts with NASA or the DOD. With the exception of this core of defense contractors (Grumman, General Dynamics, Northrop, Martin Marietta, and McDonnell Douglas), an increase in military spending no longer translated directly into corporate profits.

For many university administrators and corporate research managers, the military buildup was a distraction from structural changes taking place within the American economy. The American economy had stagnated in the 1970s, with real wages falling and productivity growth nearly flat. With instability in the Middle East, oil prices soared; inflation soon followed. The U.S. Federal Reserve’s target rate for overnight bank-to-bank lending, a benchmark for the overall economy, climbed steadily from its historical postwar average of below 5 percent to a peak of nearly 20 percent in 1981, making capital investments prohibitively expensive. Economists and business leaders expressed particular concern over the emergence of a permanent trade deficit: every year since 1976, the United States has imported more goods and services than it has exported. Japan, in contrast, experienced phenomenal economic growth. For the first time since the Great Depression, the challenge of economic competitiveness ranked equally with the fight against Communism in the minds of both the public and those who established federal science policy.

An increasingly powerful conservative political movement placed the blame for this economic downturn on the growth of the federal government. Neoliberals like the University of Chicago economist Milton Friedman argued that excessive federal regulation and taxation discouraged corporate innovation. First under Carter and then more aggressively under Reagan, health, safety, and environmental regulations became subject to the same sorts of cost-benefit analyses previously reserved for federal spending. Conservative economists, moreover, argued that dramatic changes would be needed in science and technology policy to encourage inventors to bring new products to market. Even before Reagan’s election in 1980, conservative economists were recommending a shift in the role of the federal government from underwriting basic research that might someday produce breakthrough technologies (the Science: The Endless Frontier model) to encouraging the development of commercial applications from existing scientific knowledge (a concept known as technology transfer).

These recommendations soon took the form of concrete policy actions that transformed the research economy in the 1980s. The two most important of these were the 1980 Patent and Trademark Law Amendments Act (also known as the Bayh-Dole Act after its sponsors in the Senate) and the 1981 Economic Recovery Tax Act. The Bayh-Dole Act allowed universities, small businesses, and nonprofits to file patents on federally funded research, which had been a privilege granted case by case rather than a right under previous law. Similarly, the Economic Recovery Tax Act provided tax credits for businesses that invested in scientific and technical R&D. Universities began courting industrial patrons on a level not seen since before World War II, with many campuses building research parks dedicated specifically to conducting investigations on behalf of corporate sponsors. Even the NSF, the last redoubt of basic research, increased its support for applied research and encouraged the creation of public-private cooperative research centers. By the end of the decade, at least 10 percent of the NSF’s budget was dedicated to engineering, with researchers in all fields increasingly asked to identify the practical applications that might result from their work.

Critics soon charged that corporate funds were distorting research agendas, warping graduate education, and inhibiting the free exchange of information. Though few commented on it at the time, their complaints were eerily reminiscent of those from critics who had lamented the influence of military dollars on campuses in the 1950s. Yet beneath these cosmetic similarities, there lurked a crucial difference: in the early Cold War, the benefits of this system accrued to the public, in the form of supplying the needs of the federal government; in the 1980s, they accrued to private industry, in the form of corporate profits. As a means of illustrating both continuity and change, the next two sections take a closer look at scientific entrepreneurship and military R&D in the 1970s and 1980s.

The New Academic Entrepreneurship: Biotechnology

Physics may have captured the spotlight for most of the Cold War, but some of the era’s most dramatic advances came from the field of biology. James Watson and Francis Crick’s 1953 discovery of the structure of DNA was only the most famous of a series of discoveries that had, collectively, transformed biology from a descriptive science to a powerhouse of experimental method. When researchers at Stanford University and the University of California–San Francisco announced in 1973 that they had successfully inserted a specific sequence of DNA into a target organism, the stage was set for a biological gold rush. Within a few short years, biotechnology became the darling of the stock market; academic biology would never be the same.

As pioneered by UCSF’s Herbert Boyer, Stanford’s Stanley Cohen, and their colleagues, recombinant DNA technology functions by tricking bacteria into treating a section of spliced DNA containing a gene sequence from a higher organism as their own. The technique’s commercial promise rested on the prolific reproductive capacity of Escherichia coli, the most common target organism. Assuming that researchers could learn how to control the system, E. coli could be used as a sort of bacterial factory to produce biological substances on demand. Instead of treating diabetes with insulin derived from slaughtered pigs, for example, it might be possible to produce commercial quantities of human insulin using recombinant DNA. Nobel laureates and science reporters alike predicted astonishing uses and dramatic applications, from manufacturing chemicals and human “spare parts” to creating novel biological weapons. Not coincidentally, commentators took to calling the technique “genetic engineering.”

Still, commercial applications for genetically engineered organisms remained years away in 1974, when Stanford and the University of California applied for a patent to cover the technique. The research had been funded with a grant from the NIH, which had recently implemented intellectual property agreements with sixty-five universities (including Stanford and California) in an attempt to encourage the commercial development of biomedical technologies. Stanford, meanwhile, was at the forefront of the movement to use revenues from patents to offset losses in defense funding. In 1970, it had established an Office of Technology Licensing to encourage faculty members to disclose inventions, file patents, and market licenses. Though the envy of university administrators, Stanford’s aggressive patent policy was not without its critics, particularly within the field of molecular biology. Aside from the question of federal funding, scientists complained that Cohen and Boyer’s achievement was not enough of an advance over existing techniques to merit the granting of a patent.

After a six-year legal battle, the U.S. Patent Office issued Patent No. 4,237,224 to Stanford in 1980. Within two weeks of the granting of the patent, Stanford had licensed the technology to seventy-two companies for annual payments of $10,000, plus royalty fees of up to 1 percent of sales on products using recombinant DNA technology; it soon had the highest patent income of any university in the country. By 1984, the income from this single patent exceeded $750,000, with lifetime earnings expected to fall somewhere between $250 million and $750 million. The earnings were so high because the scope of the patent was so vast: theoretically, anyone using recombinant techniques without a license, regardless of the specific biological substances involved, could be challenged with patent infringement.

Even during the period of uncertainty for the Cohen-Boyer patent, both corporate and academic entrepreneurs had begun to reconsider the role of intellectual property in an investment portfolio. Before the 1970s, most universities refrained from seeking patents on discoveries related to public health, under the theory that universities had a public service obligation to share the results of their research with the public. A harbinger of change was an agreement in 1974 between Harvard Medical School and the Monsanto Corporation. For a fee of $23 million, spread over twelve years, Monsanto would receive an exclusive license to patents generated by antitumor research funded by the corporation at Harvard. Although this was hardly the first time that a major corporation had partnered with university researchers, the scale of the grant and its focus on what the scientists involved insisted was “basic” research were unprecedented.

Monsanto’s investment in Harvard was based on the idea that a small upfront investment in scientific research might eventually yield tremendous long-term income in the form of patents. In contrast to the postwar model, the idea was not that basic research would inevitably lead to useful applications that could then be bought and sold; rather, it was that basic research itself could become a commodity through the patent process. Knowledge could be transformed into “intellectual property.”

Over the 1970s and 1980s, the commercialization of biotechnology proceeded along several main paths, all starting with this notion of intellectual property. The most familiar route followed the model of the Harvard-Monsanto partnership: multinational pharmaceutical companies partnered with academic researchers and small firms, both to secure patent portfolios and to keep an eye on new developments that might affect their existing product lines. The other routes to commercialization were more novel.

Beginning in the early 1970s, a number of academic researchers, including Herbert Boyer, formed start-up companies while maintaining their university affiliations. While some of these researchers retained ownership in hopes of bringing a product to market, a more typical strategy was to partner with investors from venture capital, a new form of high-risk, high-return investment then popular in California’s Silicon Valley. Venture capital made sense only in the context of a science and technology policy that prioritized economic growth; in 1978, the maximum tax rate on capital gains was cut from 48 percent to 28 percent. Because the business model relied on the promise of future investors recognizing the market value of a company’s research portfolio—including, of course, its patents—a start-up powered by venture capital might be considered successful even if it never brought a product to market. By 1980, venture capital and multinational corporations had invested more than $42 million in a handful of genetic engineering firms, most of which were founded by academic scientists or included them on their boards.

Boyer’s company, Genentech, pioneered this approach. When Boyer filed for incorporation, he did so with the assistance of Robert Swanson, an unemployed venture capitalist formerly associated with the firm of Kleiner & Perkins. Swanson had already received a promise of seed money in the form of $100,000 from his former employers, for which Kleiner & Perkins would receive 20,000 shares of Genentech stock. As founders, both Boyer and Swanson received an initial 25,000 shares. In June 1978, Genentech signed a contract with Eli Lilly worth $50,000 a month to fund the biotech firm’s attempts to synthesize human insulin. Lilly, it should be noted, was simultaneously subsidizing similar research at one of Genentech’s competitors, all in hopes of protecting its existing market share in treating diabetes. Genentech’s stock market debut in October 1980 defied all expectations, with shares jumping from $35 to $80 within minutes of the opening bell. By the time the market finally closed, each of the 1.1 million shares was worth $71, and, at least on paper, Genentech was worth $532 million. For a company without products or revenue, this was a remarkable achievement.

Eventually, Genentech did manage to bring products to market, as did its competitors. The stock market bubble for biotech also eventually burst, as most bubbles do. Even so, the commercialization of biotech set a pattern for a series of research-driven industries in the 1980s and 1990s. University faculty members in science and engineering departments have become well versed in the ways of patents, articles of incorporation, and exit strategies as investors have jumped on the technology bandwagon. And while some faculty members are deeply uncomfortable with what they consider to be the commercialization of the academy, quite a few others have gotten rich. Fields with little conceivable relationship to the market, in contrast, have suffered budget cuts and challenges to their very existence. With the end of the Cold War imperative, commercial value has come to supplant national defense as the defining characteristic of academic science.
