Trump’s potential impact on emerging and disruptive technologies
- by Bulletin of the Atomic Scientists
- Nov 06, 2024
Tesla CEO Elon Musk with Donald Trump during a campaign rally on October 5. Trump has suggested Musk could lead a government efficiency commission under his administration. Credit: Jim Watson/AFP/Getty Images/TNS/Alamy Live News
State-level regulatory initiatives and voluntary company standards may ultimately become more important for addressing near-term harms from AI, depending on whether the administration chooses to set guardrails for the private sector (both the first Trump administration and the Biden administration largely pursued voluntary measures with leading AI firms). Personnel will shape policy: Prioritizing long-term risks from foundational models could come at the expense of shorter-term challenges like bias and disinformation. The major question will be whether Trump 2.0 builds on or disrupts existing AI governance progress.
– Owen Daniels, Associate Director of Analysis & Andrew W. Marshall Fellow, Georgetown Center for Security and Emerging Technology (CSET)
Military drones
I would expect the next US president to pay attention to maintaining, and further developing, the technological edge of military drones. I would therefore anticipate an interest in expanding drone capabilities across all operating domains. This is especially likely to be the case for maritime drones, which seem to promise versatile applications. The new US leadership is also likely to be enthusiastic about further experimentation with artificial intelligence to enhance the autonomy of uncrewed vehicles. I would be surprised if, under the new administration, drones did not become agents of algorithmic warfare. And I would expect this to happen without the United States endorsing much international regulation. However, major challenges lie ahead in countering small drones. The United States Armed Forces will need to stay ahead of adversaries and improve the protection of American military bases at home and abroad against drone threats—both those that spy and those that kill. At the same time, the pressure to keep the cost of new drone capabilities low should push the new president to create more Replicator-like projects. This will also require navigating the intricacies of the nascent defense technology industry and adjusting procurement processes for software-heavy innovations in drone technologies.
–Dominika Kunertova, research scientist in international security and emerging technologies
Light touch on tech policy and regulation
I think we have every reason to expect that a Trump administration is going to have a light touch in terms of tech policy and regulation. He has received support from tech accelerationists, has a vice president who was a Silicon Valley venture capitalist, and is being advised by supporters like Elon Musk who have expressed concerns about the onerousness of regulations on innovation. Those supporters are motivated by backlash against a Biden administration that they viewed as skeptical toward tech, whether through the reach of FTC Chair Lina Khan, that of SEC Chairman Gary Gensler on cryptocurrency, or the Biden Executive Order on Artificial Intelligence. I would suspect Trump will appoint more tech-friendly individuals to those roles, and he has said he would repeal the Biden executive order. However, I think we have reason to believe he will keep the export controls on semiconductor chips that are at the center of the tech and geopolitical competition with China and will continue to use tariffs—as in his first term and in the Biden administration—to encourage domestic manufacturing, including in the tech space. Overall, though, I would predict greater friendliness toward the tech sector, manifested as fewer regulations and fewer antitrust cases against the tech industry.
–Sarah Kreps, the John L. Wetherill Professor in the Department of Government, adjunct professor of law, and director of the Tech Policy Institute at Cornell University
An all-hazard approach to AI
The future of artificial intelligence will crystalize over the next four years, coinciding with a second Trump administration. It could be a period marked by more powerful AI models and a geopolitical and corporate race to develop the most advanced AI. These capabilities could be increasingly incorporated within weapons systems, critical infrastructure, and broader society. This period could also be marked by barriers to continued unfettered progress. Onerous energy requirements, talent shortages, semiconductor constraints, and unforeseen limitations on algorithmic improvements might hold developers back. Perhaps it will be some combination of the two. Regardless, choices made by President Trump and his team will shape this path.
The Trump Administration will need to face the national and economic security threats posed and heightened by AI. Increasingly powerful AI systems could worsen proliferation of chemical and biological weapons, disrupt already-weak nuclear stability arrangements, feed into a hyperactive and muddied information ecosystem, and disempower the very working class that Trump seeks to protect. These challenges are not partisan issues. And the need for the national security community to manage them will not dissipate with a change in Administration.
The prior Trump Administration had the foresight to establish a substantial body of AI policy before artificial intelligence garnered significant attention following the release of ChatGPT and other models. For example, in one White House memorandum, agencies were encouraged to “be mindful of any potential safety and security risks and vulnerabilities, as well as the risk of possible malicious deployment and use of AI applications” and to “consider, where relevant, any national security implications raised by the unique characteristics of AI and AI applications and take actions to protect national security.” The Trump Administration also agreed to the OECD AI Principles, including that “AI systems should be robust, secure and safe throughout their entire lifecycle.” A continuation of these policies as a baseline for addressing AI safety and security should be expected.
Even if the next Administration revokes or revises other existing executive orders on AI, the risks of AI development will not simply be dismissed. In many ways, agencies were building on the original Trump policies. For example, the Department of Homeland Security released a report on the intersection of AI with chemical, biological, radiological, and nuclear (CBRN) threats. It found that “As AI technologies advance, the lower barriers to entry for all actors across the sophistication spectrum may create novel risks to the homeland from malign actors’ enhanced ability to conceptualize and conduct CBRN attacks.” These concerns have been raised across the political spectrum, and the Department of Homeland Security can capitalize on this work to institute other measures that reduce AI risk.
The future of AI governance will also include policies that are, perhaps counterintuitively, not specific to artificial intelligence. The laws and policies needed to reduce the risk of an AI-related catastrophe are also approaches relevant to all hazards—many of which require their own upgrading and reform given the various global threats we face. These could include crisis planning, resilience of critical infrastructure, and emergency management. For example, the Federal Emergency Management Agency develops Federal Interagency Operational Plans (FIOPs), which lay out the roles and responsibilities, coordination mechanisms, and guidance for responding to a range of crisis scenarios. The US Global Catastrophic Risk Management Act requires these plans to be updated to better consider global catastrophic risk, including from AI. The expectation should be that these efforts will continue in the next Trump Administration, as exemplified by the leadership it displayed in reforming core all-hazard policies on continuity of operations and continuity of government in the final months of its last term.
–Rumtin Sepasspour, cofounder and director of policy at Global Shield, an international organization advocating for policy action to reduce global catastrophic risk