The Democratization of Science Ushers in a New World Order
Once the pinnacle of national achievement, space has become a trophy to be traded between two business owners. On April 8, Elon Musk’s SpaceX finally succeeded in landing its Falcon 9 rocket on a drone ship in the ocean, reinforcing its lead over Jeff Bezos’s Blue Origin, which claimed a marginal victory over the Thanksgiving weekend late last year. Both companies have passed notable milestones toward affordable spaceflight for private citizens. But perhaps most remarkable is that we’re talking about two private companies at a time when most still view space exploration as the territory of governments. In many ways, our narrative of space is still dominated by our memory of the space race set in motion by the Soviet launch of Sputnik 1 on October 4, 1957 — an event that shaped government investment and global power for the past six decades. That era is over.
While we could certainly carry the “space race” analogy too far, one trend is clear: Private groups and even private citizens are achieving advances in many domains that were once the exclusive dominion of the world’s most powerful nations.
One field in which this trend is clear is biotechnology. A movement known to some as do-it-yourself bio (DIYbio) has given the average citizen the ability to experiment with the fundamentals of life on this planet. These “life hackers” benefit from the online availability of new and used lab equipment, as well as community wet labs in which groups pool resources to access tools that otherwise would prove cost-prohibitive. This democratization of science has many benefits, including greater diversity of ideas and cross-pollination of expertise, often in the spirit of conducting scientific experimentation just for the fun of it. Indeed, the basic science community is founded on a culture of curiosity and experimentation; the DIY movement builds on this foundation by providing an infrastructure — including equipment, processes and protocols, and an open exchange of ideas — that gives people outside formal institutions access to the resources those institutions typically provide.
However, it is this very culture of openness and sharing that has caused some concern. When scientists in the United States and the Netherlands succeeded in mutating the H5N1 bird flu virus into a form that could spread among mammals, the U.S. National Science Advisory Board for Biosecurity recommended censoring publication of the findings, asking the journals involved to withhold key details for fear that amateurs or violent extremists would seek to reproduce the results. Indeed, the fear of global devastation caused by a man-made virus has been the subject of popular movies for some time — like the 1995 hit Twelve Monkeys.
But concerns about the malicious use of biotech capabilities are not confined to Hollywood. At the FBI’s request in 2012, the American Association for the Advancement of Science convened a meeting to review and establish safeguards to protect against the misuse of scientific knowledge. While the meeting succeeded in establishing norms, it focused on research being conducted in formal institutions. The safeguards do not and cannot control amateur science.
Scientists in well-established institutions are debating how to handle recent breakthroughs. In November 2015, biologists at the University of California, Irvine used a gene-editing technology known as “CRISPR” to engineer a “selfish” gene trait in mosquitoes that demonstrated the ability to stop the malaria parasite from growing. The term CRISPR was coined in 2002 to describe repeating DNA sequences found in bacteria; a decade later, researchers harnessed the underlying bacterial machinery as a technique to quickly, easily, and reliably modify DNA in plants and animals, including humans. It has been so successful at simplifying the process that DIY biologists lacking formal training are able to use it — people such as Johan Sosa, an IT consultant who took up biohacking as a hobby and has used CRISPR to modify DNA in plants. Although he was initially drawn to DIY biology in hopes of growing organs in a lab, he quickly learned that such feats are unrealistic for self-taught biohackers. Nonetheless, government officials are sufficiently concerned with the capability that earlier this year the director of national intelligence listed gene editing among potential weapons of mass destruction in his office’s annual assessment of global threats — CRISPR was a key driver in that decision, with the report citing the technology’s “broad distribution, low cost, and accelerated pace of development.”
Although the community had seen this technology coming for more than a decade, guidance related to its use is still unclear. While some scientists feel the potential malaria-eradicating mosquitoes engineered at UC Irvine should not be released without broad public support, the biologists behind the breakthrough hope to release the mosquitoes as soon as they find a malaria-affected community willing to accept the engineered bugs. With the recent spread of the Zika virus, scientists are considering turning the same approach against the mosquitoes that carry that disease as well. The underlying technology, called a gene drive, was first demonstrated just last year and is designed to force a genetic change to spread through a population as it reproduces.
In December 2015, the National Academies held an international summit on human gene editing, concluding with a panel discussion on governance, regulation, and control. Opening with the observation that much of this activity is “invisible to the regulatory system,” the panel highlighted several disconcerting facts at the intersection of CRISPR and the DIYbio movement. Barriers to entry are falling, giving individuals capabilities that only large organizations possessed in the not-too-distant past. Robotic equipment can be purchased online to conduct complex and hazardous experiments. Alternatively, remote labs will run your experiments for you and provide documentation of the results. Indeed, this approach was applied in 2014 in a cancer study in which CRISPR was used to create tumors in mice with a 100-percent success rate. While the objectives of this study were noble, they need not have been — the lab to which the experiment was outsourced guarantees the confidentiality of any transaction. The conclusion: Regulation isn’t feasible. Instead, the most anyone can do is promote awareness, establish norms, and leave it to the community to self-police — something of a “neighborhood watch” model — which has essentially been the FBI’s approach to date.
The tension between technology and governance is visible beyond the complex and unique domains of space and the life sciences. In the finance industry, technological developments are also driving change at a pace that exceeds the agility of government regulation. Created in 2008, Bitcoin quickly spread across the globe as an alternative digital currency specifically designed to circumvent established financial institutions, and regulators in the United States and elsewhere are still debating how to address digital currency. Meanwhile, in high-speed trading, the race to shrink the lag between when a trader initiates a trade and when the system records it has pushed the industry from fiber-optic cables to microwave and millimeter-wave links to lasers. While some are calling for government regulators to step in, it is unclear whether they could slow or halt these changes.
The ability of the government to regulate is trailing technology in other areas as well. Last year, unmanned aerial vehicles notably began to interfere with public safety and military operations. In California, firefighting aircraft were grounded to avoid midair collisions with drones piloted by citizen reporters and voyeurs snapping footage of the blaze, delaying efforts to put out the fires. In 2015, there were at least 35 recorded incidents of drones piloted by private citizens interfering with military operations, including several near-misses with military aircraft. While 2015 ended with a new Federal Aviation Administration requirement for people to register their drones, it’s hard to see how the provision does anything other than keep honest people honest — if it even achieves that.
Commercial companies seeking advancements in drone technology to aid their business interests have been driven overseas for testing. According to one U.S. government report, Amazon is testing its package delivery drone in Canada, while Google has conducted similar testing of drone delivery in Australia. In addition to being ahead of the United States in finalizing regulations, these countries have further attracted U.S. businesses by developing much less restrictive rules than those proposed or anticipated by U.S. authorities.
For military drone technology, the United States is still in the lead. However, in an ongoing clash between the White House and Congress, the United States has so far refused to sell its drone technology to Jordan, which requested the aircraft to aid in its fight against the Islamic State in Iraq and the Levant (ISIL). The White House based its denial on an international agreement, the Missile Technology Control Regime, which is intended to inhibit the proliferation of advanced military technology. Yet it is unclear whether the refusal will actually achieve arms control objectives, since senior Chinese officials are courting Jordan to buy China’s Caihong 5 drone instead.
The ineffectiveness of current regulation in the face of the rapid acceleration and democratization of science and technology has led some groups to take matters into their own hands. Last December, two announcements unveiled parallel efforts in the United States and the United Kingdom. In the United States, several technology leaders, including Elon Musk and Peter Thiel, committed $1 billion to create a non-profit called OpenAI, whose inaugural blog post positioned it as a protector of the public interest. In London, a university ethics professor launched the Foundation for Responsible Robotics, citing direct concerns that a gap in governance will lead to “governments and companies [pushing] out robots without fully understanding the technology.”
But are private organizations best positioned to identify and defend the common good? With the success of Anonymous in attacking ISIL’s social media presence, some have argued that private citizens can “act across more borders, more effectively and more rapidly than governments can.” Indeed, governments have sometimes covertly employed non-state groups for this very reason. However, are the speed and democratization of technology moving us more broadly into a world in which vigilantes are more effective than states at governing? To the extent that this may be the case, we must soberly consider the risks of such developments.
Silicon Valley companies are already talking about a mass exit from the reach of U.S. officials, possibly following the Bitcoin model of using technology to intentionally circumvent government control. With a rising population of individuals around the globe becoming empowered with the knowledge and resources once reserved for large formal institutions, a growing cadre of global scientific leaders is warning of impending disaster if significant changes aren’t made within both governments and the scientific community. Recently, the Bulletin of the Atomic Scientists advanced the minute hand of its “doomsday clock” to three minutes to midnight, signifying a consensus among its scientists that recent trends threaten catastrophic harm to our way of life. The clock, in existence since 1947, has been closer to the apocalyptic “midnight” only once, in 1953, following back-to-back hydrogen bomb tests by the United States and the Soviet Union. If governments don’t begin treating technology trends as a fundamental part of national security policy-making, we must be prepared for our current governing bodies to become irrelevant as the hand on the doomsday clock advances.
Christopher Zember is Director of the Center for Technology and National Security Policy, a Department of Defense research center at the National Defense University. Views expressed here are solely his own, and do not necessarily reflect those of the National Defense University, the Department of Defense, or the U.S. government.