How ChatGPT has led to a debate about AI regulation and bans in Europe
Context- Garante, the Italian data protection authority, apparently jumped the gun at the end of March when it imposed a temporary ban on ChatGPT, a chatbot that uses artificial intelligence (AI) to generate texts that seem as if they were created by humans. The watchdog was less concerned by the use of AI — the simulation of human intelligence by computer systems — than by breaches of data protection legislation.

Garante then told OpenAI, the Microsoft Corp-backed company behind ChatGPT, that it would have to be more transparent with its users about how their data are processed. It also said that the US company had to obtain users' permission if their data were to be used to further develop the software.
EU-wide regulation of AI
- Spain and France have raised similar concerns about ChatGPT. For the moment, there is no EU-wide regulation of the use of AI in products such as self-driving cars, medical technology, or surveillance systems.
- The European Parliament is still debating legislation proposed by the European Commission two years ago. Once Parliament approves it, the EU member states themselves will have to agree, so it will probably be early 2025 before it comes into force.
- It is not clear whether ChatGPT or a similar product would even be covered by the EU regulation, which defines levels of risk in AI that run from “unacceptable” to “minimal or no risk.”
- As the legislation stands, only programs assigned scores of “high risk” or “limited risk” will be subject to special rules regarding the documentation of algorithms, transparency and the disclosure of data use. Applications that document and evaluate people’s social behavior to predict certain actions will be banned, as will social scoring by governments and certain facial recognition technologies.
- Legislators are still discussing to what extent AI should be allowed to record or simulate emotions, as well as how to assign categories of risk.
- The EU members’ data protection commissioners want AI to be monitored by an independent body, which would require amending the existing data protection legislation.
Striking a balance between consumer protection and the economy
- The European Commission and Parliament are trying to strike a balance between consumer protection, regulation and the free development of the economy and research. After all, as the EU Commissioner for the Internal Market Thierry Breton has pointed out, AI offers “immense potential” in a digital society and economy.
- Two years ago, when the bloc’s AI legislation was presented, the EU did not want to drive the developers of AI away but promote them and persuade them to settle in Europe.
- Mark Brakel of the US-based nonprofit Future of Life Institute told DW that companies also had to be held accountable by regulators, and that merely assigning risk levels to AI applications did not suffice.
- He suggested that developers themselves should have to monitor the risks of each individual application and that measures should be taken to ensure that “companies are mandated to do this risk management and publish” the results.
- What is striking about ChatGPT, which is causing a stir in Europe, is that it was developed in the US for global use.
- OpenAI could soon face stiff competition from other US companies such as Google and Elon Musk’s Twitter. Chinese tech giants are also in the race, with Baidu already having created a chatbot called Ernie.
Conclusion- AI chatbots can perform a multitude of tasks that simplify our work. However, they sometimes produce inaccurate, misleading and even dangerous results, surprising their own developers. Hence, a common framework to govern the ethical use of AI is needed.
Syllabus- GS-3; Science and Tech
Source- Indian Express