US government forms new AI Safety Board, keeps Elon Musk and Mark Zuckerberg out of it
- by Firstpost
- May 01, 2024
In forming the new Artificial Intelligence Safety and Security Board, the Joe Biden-led US government left Tesla and SpaceX CEO Elon Musk and Meta CEO Mark Zuckerberg off the board
Tesla CEO Elon Musk, US President Joe Biden, Meta CEO Mark Zuckerberg. Image Credit: Reuters, Reuters, AFP
The Joe Biden-led administration created a new AI safety board earlier this week, which included some of the biggest names from the tech industry.
Created in response to the escalating threat posed by deepfake technology, the Artificial Intelligence Safety and Security Board includes notable names such as OpenAI’s Sam Altman, NVIDIA’s Jensen Huang, Microsoft’s Satya Nadella, Alphabet’s Sundar Pichai, Adobe’s Shantanu Narayen and AMD’s Lisa Su, among others.
Also part of the elite group were a few government officials, including some from the White House, state governors, defence contractors and human rights bodies.
Two people who were conspicuously left out of the board were Tesla and SpaceX CEO Elon Musk, and Meta CEO Mark Zuckerberg.
The move comes in the wake of numerous incidents involving the malicious use of deepfakes targeting individuals ranging from politicians to celebrities, and even minors.
Instances of using “nudification” programs and GenAI to create deepfakes that are then used to blackmail or harass people, especially women, have become increasingly prevalent, particularly within American educational institutions, as highlighted by The New York Times.
The formation of a federal board comprising industry heavyweights represents a crucial step forward in addressing these pressing concerns. However, controversies have already arisen regarding the composition of the board, because of the absence of Zuckerberg and Musk.
Speculation abounds as to why Zuckerberg and Musk were omitted from the roster of board members released by the Department of Homeland Security (DHS).
While Secretary of Homeland Security Alejandro Mayorkas cited the exclusion of social media platforms as the reason for their omission, scepticism persists among many observers.
Meta, formerly Facebook, has faced scrutiny, including a pending EU probe for purportedly failing to curb Russian disinformation on its platform.
Additionally, concerns have been raised about inadequate measures to combat ads promoting nudification apps.
On Musk’s side, a Media Matters report exposing ads appearing alongside antisemitic content on X led major advertisers to withdraw from that platform, prompting Musk to file a lawsuit against Media Matters.
Such controversies have cast doubt on the commitment of Meta and Musk’s companies to AI safety initiatives, particularly given the board’s mandate to advise the DHS and other stakeholders on potential AI disruptions.
Zuckerberg has advocated for open-source AI, which presents unique challenges in terms of regulation and safety. Meanwhile, concerns have been raised about Musk’s unpredictability, compounded by his ongoing legal disputes with the Securities and Exchange Commission (SEC).
Despite these controversies, companies involved in the safety board have demonstrated a greater willingness to engage in discussions on AI safety. For instance, during a Senate hearing, Altman emphasised the importance of designing safe AI products.
Moreover, these companies have implemented their own AI safety protocols, albeit with varying degrees of success. OpenAI, for instance, employs reinforcement learning with human feedback to improve its model’s behaviour, while other firms have developed their own safety frameworks.
Efforts to mitigate the spread of deepfakes include the adoption of watermarking techniques by companies like Adobe and Google, as well as the proposal of a CSAM database to train AI models in detecting potentially explicit content.
While these measures represent positive steps toward addressing the deepfake menace, the complex nature of the issue necessitates continued collaboration between independent researchers and industry stakeholders to effectively safeguard against AI-driven threats.