AI Summit: Why new global pact on tackling Artificial Intelligence risks is a big deal
Context- Bletchley Park in Buckinghamshire, north of London, was once the top-secret base of the codebreakers who cracked the German ‘Enigma’ code, hastening the end of World War II. This symbolism was evidently a reason why it was chosen to host the world’s first-ever Artificial Intelligence (AI) Safety Summit.
The two-day summit, held on November 1-2 and attended by global leaders, computer scientists, and tech executives, began with a bang: a pioneering agreement was wrapped up on the first day, resolving to establish “a shared understanding of the opportunities and risks posed by frontier AI”. Twenty-eight countries, including the United States, China, Japan, the United Kingdom, France, and India, along with the European Union, signed a declaration stating that global action is needed to tackle the potential risks of AI.
The Bletchley Park Declaration
- “Frontier AI” is defined as highly capable foundation generative AI models that could possess dangerous capabilities that can pose severe risks to public safety.
- The declaration, which was also endorsed by Brazil, Ireland, Kenya, Saudi Arabia, Nigeria, and the United Arab Emirates, incorporates an acknowledgment of the substantial risks from potential intentional misuse or unintended issues of control of frontier AI — especially cybersecurity, biotechnology, and disinformation risks.
- The declaration noted the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”, as well as risks beyond frontier AI, including those of bias and privacy.
- These risks are “best addressed through international cooperation”, the Bletchley Park Declaration said. As part of the agreement on international collaboration on frontier AI safety, South Korea will co-host a mini virtual AI summit in the next six months, and France will host the next in-person summit within a year from now.
President Biden’s Executive Order
- The declaration came days after US President Joe Biden issued an executive order aimed at safeguarding against threats posed by AI, and exerting oversight over the safety benchmarks used by companies to evaluate generative AI bots such as ChatGPT and Google Bard. The order was seen as a vital first step by the Biden Administration to regulate rapidly advancing AI technology.
- The order, issued on Monday, requires AI companies to share the results of tests of their newer products with the federal government before making the new capabilities available to consumers.
- The safety tests undertaken by developers, known as “red teaming”, are aimed at ensuring that new products do not pose a threat to users or the public at large. Following the order, the federal government is empowered to force a developer to tweak or abandon a product or initiative.
- Further, a new rule seeks to codify the use of watermarks that alert consumers to AI-enabled products, which could help limit the threat posed by content such as deepfakes. Another standard asks biotechnology firms to take appropriate precautions when using AI to create or manipulate biological material.
Different Countries, Varied Approaches
- Policymakers across jurisdictions have stepped up regulatory scrutiny of generative AI tools, prompted by ChatGPT’s explosive launch. The concerns fall under three broad heads: privacy, system bias, and violation of intellectual property rights.
- The EU has taken a tough line, proposing to bring in a new AI Act that classifies artificial intelligence according to use-case scenarios, based broadly on the degree of invasiveness and risk. The UK is at the other end of the spectrum, with a decidedly “light-touch” approach that aims to foster, and not stifle, innovation in this field.
- The US approach is seen to be somewhere in between, with Monday’s executive order setting the stage for defining an AI regulation rulebook that will ostensibly build on the Blueprint for an AI Bill of Rights unveiled by the White House Office of Science and Technology Policy in October 2022.
- China has released its own set of measures to regulate AI.
- All of this comes in the wake of calls this April by tech leaders Elon Musk (of X, SpaceX, and Tesla), Apple co-founder Steve Wozniak, and more than 15,000 others for a six-month pause in AI development. Labs are in an “out-of-control race” to develop systems that no one can fully control, the tech leaders warned.
India’s Change in Stance
- Union Minister of State for IT Rajeev Chandrasekhar, who is representing India at Bletchley Park, said at the opening plenary session that the weaponisation witnessed in the social media era must be overcome, and that steps should be taken to ensure AI represents safety and trust.
- India has been progressively pushing the envelope on AI regulation. On August 29, less than two weeks before the G20 Leaders Summit in New Delhi, Prime Minister Narendra Modi had called for a global framework on the expansion of “ethical” AI tools.
- This statement put a stamp of approval at the highest level on the shift in New Delhi’s position: from not considering any legal intervention to regulate AI in the country, to actively formulating regulations based on a “risk-based, user-harm” approach.
- Part of this shift was reflected in a consultation paper floated in July by the apex telecommunications regulator, the Telecom Regulatory Authority of India (TRAI), which said that the Centre should set up a domestic statutory authority to regulate AI in India through the lens of a “risk-based framework”, while also calling for collaboration with international agencies and governments.
- This also came amid indications that the Centre was looking to draw a clear distinction between different types of online intermediaries, including AI-based platforms, and to issue specific regulations for each of these intermediaries in fresh legislation called the Digital India Bill, which is expected to replace the Information Technology Act, 2000.
- In April, the Ministry of Electronics and IT had said that it was not considering any law to regulate the AI sector, with Union IT Minister Ashwini Vaishnaw admitting that though AI “had ethical concerns and associated risks”, it had proven to be an enabler of the digital and innovation ecosystem.
Way Forward- TRAI’s July recommendation on forming an international body for responsible AI was broadly in line with the approach enunciated by Sam Altman, co-founder and CEO of OpenAI — the company behind ChatGPT — who had called for an international regulatory body for AI, akin to the International Atomic Energy Agency (IAEA) that oversees nuclear non-proliferation.
Syllabus- GS-3; Science and Tech
Source- Indian Express