EU prepares to impose rules on artificial intelligence
The European Union (EU) is working to determine the scope of a law that will impose strict rules on artificial intelligence technologies such as ChatGPT, the chatbot whose use has recently become widespread.
With artificial intelligence now widely used in fields such as manufacturing, healthcare, finance, education, transportation, and security, clearly defining the rules for this technology is of great importance.
The EU does not yet have a regulation covering ChatGPT or similar artificial intelligence systems, but two years ago the European Commission prepared the first legislative proposal setting out a framework of new rules on artificial intelligence and presented it to member states and the European Parliament (EP). The proposal introduced limitations and transparency rules for the use of artificial intelligence systems. If it becomes law, systems such as ChatGPT will also have to comply with these rules.
RISK-BASED APPROACH
The new rules, which are expected to apply in the same way in all member countries, take a risk-based approach. In the Commission's proposal, artificial intelligence systems are divided into four groups: unacceptable risk, high risk, limited risk, and minimal risk.
Artificial intelligence systems considered a clear threat to people's safety, livelihoods, and rights fall into the unacceptable risk group, and their use is expected to be prohibited.
Systems or applications that override people's free will, manipulate human behavior, or carry out social scoring are also banned under this category.
Critical infrastructure, education, surgery, CV assessment in recruitment, credit scoring, testing, migration, asylum and border management, travel document verification, biometric identification systems, and judicial and democratic processes fall into the high-risk group. AI systems in this group face strict requirements before they can be placed on the market: they must be non-discriminatory, their results must be traceable, and they must be subject to adequate human oversight.
FINES ON THE AGENDA
The proposal includes fines of up to 30 million euros, or 6 percent of a company's global annual turnover, for violations of the AI law.
Work on the artificial intelligence law, which requires the approval of the EP and member states to enter into force, is still ongoing. EU member states adopted a common position on the law at the end of last year. (AA)