Authorities around the world are racing to develop artificial intelligence standards, notably in the European Union, where draft legislation is set to reach a critical point on Thursday. A European Parliament committee will vote on the new rules, which are part of a years-long effort to create guardrails for artificial intelligence.
Those efforts have become more pressing as the rapid rise of ChatGPT demonstrates both the benefits of the technology and the new risks it poses.
The AI Act, first proposed in 2021, will apply to any product or service that employs an artificial intelligence system. The act will categorize AI systems into four levels of risk, ranging from minimal to unacceptable. Riskier applications will face more stringent requirements, such as being more transparent and using accurate data. Consider it a “risk management system for AI,” according to Johann Laux, an expert at the Oxford Internet Institute.
The Risks
One of the EU’s key priorities is to preserve basic rights and values while guarding against potential AI hazards to health and safety.
That means some AI applications are strictly prohibited, such as “social scoring” systems that judge people based on their behavior, or interactive talking toys that encourage dangerous conduct.
Predictive policing tools, which crunch data to forecast where crimes will occur and who will commit them, are also expected to be banned. Remote facial recognition, in which the technology scans passers-by and uses artificial intelligence to match their faces against a database, is prohibited except in a few narrow situations, such as preventing a specific terrorist threat. The vote on Thursday will determine the scope of the ban.
The goal is to “avoid a controlled society based on AI,” said Brando Benifei, an Italian member of the European Parliament who is helping to lead the body’s AI work. “We believe that these technologies could be used for evil as well as good, and we believe the risks are too great.”
AI systems used in high-risk categories such as employment and education, which could affect the course of a person’s life, must meet strict requirements, such as being transparent with users and implementing risk assessment and mitigation measures.
According to the EU’s executive arm, most AI systems, such as video games or spam filters, are low- or no-risk.