The final vote on the artificial intelligence law is scheduled to take place on Wednesday, after a process lasting nearly three years. There is hope in Strasbourg that the law will find imitators around the world. For companies, a period of change begins.
It's time. After months of negotiations, votes and amendments, the final decision on the artificial intelligence law is scheduled for this Wednesday, expected around lunchtime. Under this law, AI systems are divided into categories: the greater the potential risks, the stricter the requirements. The law imposes many obligations on developers, companies and providers of these technologies. But the AI law also faces resistance. Algorithm Watch criticizes its “glaring gaps”, especially in the field of mass surveillance.
AI offers entirely new opportunities for businesses, but it also brings many unknowns with it. This is exactly what the European Union is dealing with. Even before ChatGPT made the topic relevant to the masses, work on a legal framework was under way. It was agreed to divide AI applications into risk groups: the higher the risk of the application, the more requirements must be met. This aims to ensure that systems operate transparently and comprehensibly. One important point applies to all groups: AI must be monitored and controlled by humans.
There are four different categories:
1. Unacceptable risk – This includes real-time biometric recognition (such as in China) and the use of AI systems in the workplace to read and monitor employees' emotions.
2. High risk – Use in education and law enforcement, permitted only under certain safety conditions, such as a risk assessment relating to fundamental rights.
3. Limited risk – Generative AI such as ChatGPT, Gemini and Galaxy AI, with the requirement that AI-generated content be labelled as such (keyword: watermarking).
4. Low risk – AI systems such as spam filters.
Exceptions: The law excludes the following from its scope of application:
1. Artificial intelligence models or systems used exclusively for scientific research purposes
2. The use of artificial intelligence systems in purely domestic activities
3. Artificial intelligence systems used exclusively for defense or military purposes
Why the strict requirements? ChatGPT (OpenAI), Gemini (Google) and all the others are already changing how we perceive content. Fake news, disinformation campaigns and manipulated images are already difficult to recognize. But so-called generative AI, in which humans provide specifications via text and the AI fills them in using massive databases, is only a small part of what AI can be used for: in medicine, research, education and law enforcement. The latter in particular needs strict rules, as Algorithm Watch demands.
Surveillance such as that practised in China should not set an example for European countries. However, the law has “glaring loopholes.” That is why people in Germany are calling for a “ban on biometric identification in public places.” Although such a ban is indeed enshrined in the AI Act, the “variety of exceptions” is worrying: “Real-time biometric remote identification in public places opens the door to conditions in which every person can be permanently identified.” Every movement in public spaces would become subject to surveillance.
Hopes of setting an international example
In future, the AI law will apply to everyone who develops, offers or uses AI systems in the EU, regardless of whether the companies are public or private. The European Union also hopes that the artificial intelligence law will find international acceptance and serve as a model to be emulated.
The vote means that MEPs will confirm the compromise already negotiated by representatives of Parliament and the EU member states. It is based on a 2021 proposal from the European Commission. AI typically refers to applications based on machine learning, in which programs sift through large amounts of data for patterns and draw conclusions from them. Such programs are already used in many fields; for example, they can evaluate images from computed tomography scans faster and more accurately than humans.
What happens next?
If the AI law is approved by the European Parliament in Strasbourg today, it will enter into force 20 days after its publication in the Official Journal of the EU. EU member states then have six months to implement the requirements, particularly those for Category 1 and 2 AI. The law will apply in full two years after it enters into force, including the penalties that may be imposed in the event of non-compliance. From proposal to final vote, the process took nearly three years.