VIEWS ON JOE BIDEN’S EXECUTIVE ORDER ON AI SAFETY

Nov 1, 2023 - 16:26
Bhubaneswar (01/11/2023): Before issuing its executive order, the Biden Administration consulted widely on AI governance frameworks over the past several months, including with India, particularly given India's leadership as Chair of the Global Partnership on AI.
This Order will put pressure on all nations, including India, to come up with their own AI safety framework and regulations.
India does prefer to participate in a global framework for such technologies, which are inherently global.
India has the opportunity to take a leading position in AI safety and privacy, especially from a Global South and Eastern viewpoint, and should grasp this.
One option would be to expand the Data Protection Bill and related legislation to include AI and Generative AI safety principles and laws. Jaspreet Bindra, Managing Director & Founder of Tech Whisperer Limited, UK, has shared his views on the subject.
Perhaps it was watching 'himself', or rather a deepfake of himself, that prompted Joe Biden to push the AI regulation 'nuclear button'.
"I've watched one of me," Mr. Biden told The New York Times, referring to an experimental deepfake of himself that his staff showed him, one capable of producing a very convincing 'presidential statement'. "I said, 'When the hell did I say that?'" Just ahead of the UK AI Safety Summit convened by Rishi Sunak and to be attended by Kamala Harris and others, the US has dropped a relative bombshell by bringing out some very interesting AI regulation.
Here are seven big takeaways:

1. Big Tech and other large foundational model developers such as OpenAI, Google and Microsoft must divulge the results of the 'red teaming' safety tests they are required to run on each new model before it is released to the public. This is a BIG one, almost like the drug trial results that pharma companies file with the FDA.

2. These results will be vetted against a high bar of standards by the National Institute of Standards and Technology (NIST). This brings AI testing on par with chemical, biological and nuclear infrastructure testing.

3. AI-generated content must be watermarked, so that it is clearly marked as such. This was expected and is very welcome for everyone – educators, social media, lawyers, etc. – and should help people discern deep fakes (like the one that Biden fell for).

4. Congress will be pressed to pass 'bipartisan data privacy legislation'. Privacy is at the heart of AI and data usage, and this might focus on children's privacy. The EU and UK are pursuing this; India has already declared privacy a fundamental right.

5. Companies' data policies will be scrutinised: another welcome step, which will evaluate how data brokers and agencies collect and use 'commercially available' datasets.

6. AI-based bias and discrimination will be pushed down: this is a big one and difficult to do. The justice system is one example, where algorithms tend to favour certain races over others in sentencing. Another welcome initiative.

7. Support for workers who could be affected by AI: this seems political and is designed to protect against job losses for workers whose roles will be taken over by AI. It might appeal to the political constituency ahead of the 2024 elections. Skilling them might be a better alternative, as AI will not take most jobs, but people using AI will.

There are other steps too, such as focusing on the safety of AI in science- and biology-related products, addressing cybersecurity threats posed by AI, and attracting top AI talent. In sum, this is a welcome announcement from a country that is usually not front and centre on regulation and guardrails; that is the preserve of the EU and UK.
Lawyers for tech companies will find loopholes, which might get plugged later.
But it sets the stage for the UK AI Safety Summit very nicely.