Europe proposes strict rules for artificial intelligence

The European Union on Wednesday unveiled strict rules to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.

The draft rules would set limits on the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank loans, school enrollment selections and the scoring of exams. They would also cover the use of artificial intelligence by law enforcement and court systems – areas considered high risk because they could threaten people’s safety or fundamental rights.

Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.

The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules have far-reaching implications for major technology companies – including Amazon, Google, Facebook and Microsoft – that have poured resources into developing artificial intelligence, but also for scores of other companies that use the software to develop medicine, underwrite insurance policies and judge creditworthiness. Governments have used versions of the technology in criminal justice and the allocation of public services such as income support.

Companies that violate the new regulations, which could take several years to move through the European Union’s policymaking process, could face fines of up to 6 percent of their global sales.

“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”

Under the European regulations, companies providing artificial intelligence in high-risk areas would have to supply regulators with proof of its safety, including risk assessments and documentation explaining how the technology makes decisions. The companies would also have to guarantee human oversight in how the systems are created and used.

Some applications, such as chatbots that provide humanlike conversation in customer service situations, and software that creates hard-to-detect manipulated images such as ‘deepfakes’, would have to make clear to users that what they are seeing is computer generated.

For the past decade, the European Union has been the world’s most aggressive watchdog of the technology industry, and its policies are often used as blueprints by other countries. The bloc has already enacted the world’s most far-reaching data privacy regulations and is debating additional antitrust and content moderation laws.

But Europe is no longer alone in pushing for tougher oversight. The largest technology companies now face a broader reckoning from governments around the world, each with its own political and policy motivations, seeking to curb the industry’s power.

In the United States, President Biden has filled his administration with critics of the industry. Britain is creating a tech regulator to police the industry. India is tightening oversight of social media. China has taken aim at domestic technology giants such as Alibaba and Tencent.

The outcomes in the coming years may reshape how the global internet works and how new technology is used, with people having access to different content, digital services or online freedoms depending on where they are.

Artificial intelligence – where machines are trained to perform work and make their own decisions by studying large amounts of data – is considered by technologists, business leaders and government officials to be one of the world’s most transformative technologies, promising huge gains in productivity.

But as the systems become more sophisticated, it can be harder to understand why the software makes a decision, a problem that could worsen as computers grow more powerful. Researchers have raised ethical questions about its use, suggesting that it could perpetuate existing biases in society, invade privacy or lead to more work being automated.

The release of the draft law by the European Commission, the bloc’s executive body, drew a mixed reaction. Many industry groups expressed relief that the regulations were not stricter, while civil society groups said they should have gone further.

“There’s been a lot of discussion over the last few years about what it would mean to regulate AI, and the fallback option so far has been to do nothing and wait to see what happens,” said Carly Kind, director of the Ada Lovelace Institute in London, which studies the ethical use of artificial intelligence. “This is the first time any country or regional bloc has tried.”

Ms. Kind said many worried that the policy was too broad and left companies and technology developers too much discretion to regulate themselves.

“If it does not lay down strict red lines and guidelines and very firm boundaries about what is acceptable, it leaves a great deal open to interpretation,” she said.

The development of fair and ethical artificial intelligence has become one of the most contentious issues in Silicon Valley. In December, the co-leader of a team at Google studying the ethical use of the software said she had been fired for criticizing the company’s lack of diversity and the biases built into modern artificial intelligence software. Debates have raged inside Google and other companies about selling the cutting-edge software to governments for military use.

In the United States, government authorities are also considering the risks of artificial intelligence.

The Federal Trade Commission warned this week against the sale of artificial intelligence systems that use racially biased algorithms, or ones that could ‘deny people employment, housing, credit, insurance or other benefits.’

Elsewhere, in Massachusetts and in cities like Oakland, California; Portland, Oregon; and San Francisco, governments have taken steps to restrict police use of facial recognition.