AI: how countries are behind in the regulation race – 12/06/2023 – Tech


When European Union leaders introduced a 125-page bill to regulate artificial intelligence in April 2021, they treated the proposal as a global model for dealing with the technology.

For three years, EU lawmakers had gathered input from thousands of experts on AI, at a time when the topic was barely under discussion in other countries. The result was a “remarkable” text that was “future-proof”, declared Margrethe Vestager, head of digital policy for the 27-nation bloc.

And then came ChatGPT.

The eerily human-like chatbot went viral last year for generating its own responses to prompts and caught EU policymakers by surprise.

The type of AI powering ChatGPT was not mentioned in the bill and was not a major focus of policy discussions.

Lawmakers and their aides exchanged calls and texts to address the gap, while technology executives warned that overly aggressive regulations could put Europe at an economic disadvantage.

Even now, EU lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in drafting the AI law.

Lawmakers and regulators in Brussels, Washington and elsewhere are losing the battle to regulate AI and are racing to catch up as concerns grow that the powerful technology will automate jobs, accelerate the spread of misinformation and eventually develop a kind of intelligence of its own.

Nations have moved quickly to address the potential dangers of AI, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly admit they barely understand how it works.

The result has been varied. President Joe Biden issued an executive order in October on the national security effects of AI, while lawmakers debate what measures, if any, should be taken.

Japan is drafting non-binding guidelines for the technology, while China has imposed restrictions on certain types of AI. The UK has stated that existing laws are adequate to regulate the technology. Saudi Arabia and the United Arab Emirates are investing government money in AI research.

At the root of fragmented actions is a fundamental incompatibility. AI systems are advancing so quickly and unpredictably that lawmakers and regulators cannot keep up.

This gap has been exacerbated by a deficit of AI knowledge in governments, labyrinthine bureaucracies and fears that too many rules could limit, even if unintentionally, the benefits of the technology.

Even in Europe, perhaps the world’s most aggressive technology regulator, AI has confounded policymakers.

The European Union has pressed ahead with its new law despite disputes over how to deal with the makers of the latest AI systems.

A final agreement could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work.

But even if it is approved, it is not expected to take effect for at least 18 months — an eternity in AI development — and how it will be enforced remains unclear.

“The verdict is still out on whether you can regulate this technology or not,” said Andrea Renda, a senior researcher at the Center for European Policy Studies, a think tank in Brussels. “There is a risk that this EU text will end up being prehistoric.”

The absence of rules left a vacuum. Google, Meta, Microsoft and OpenAI, which created ChatGPT, have been left to regulate themselves as they race to create and profit from advanced AI systems.

Many companies, preferring non-binding codes of conduct that provide flexibility to accelerate development, are lobbying to soften proposed regulations and pitting governments against each other.

Without united action soon, some officials have warned that governments could fall even further behind AI creators and their discoveries.

“No one, not even the creators of these systems, knows what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of the United Kingdom who chaired an AI Safety Summit last month with 28 countries. “The urgency comes from there, from there being a real question about whether governments are prepared to deal with and mitigate the risks.”

Europe Takes the Lead

In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence.

EU officials selected them to provide advice on the technology, which was gaining attention for powering self-driving cars and facial recognition systems.

The group debated whether there were already enough European rules to protect against the technology and considered potential ethical guidelines, said Nathalie Smuha, a legal scholar in Belgium who coordinated the group.

But as they discussed the possible effects of AI — including facial recognition technology’s threat to people’s privacy — they recognized “that there were all these legal loopholes, and what happens if people don’t follow those guidelines?” she said.

In 2019, the group published a 52-page report with 33 recommendations, including greater oversight of AI tools that could harm individuals and society.

Ursula von der Leyen, president of the European Commission, made the issue a priority on her digital agenda. A group of 10 people was assigned to develop the group’s ideas and draft a law.

Another committee in the European Parliament, the European Union’s legislative branch, held nearly 50 hearings and meetings to consider the effects of AI on cybersecurity, agriculture, diplomacy and energy.

In 2020, European policymakers decided that the best approach was to focus on how AI was used rather than the underlying technology. They said that AI was not intrinsically good or bad; it depended on how it was applied.

Therefore, when the AI Act was unveiled in 2021, it focused on “high-risk” uses of the technology, including law enforcement, school admissions, and hiring. The act largely avoided regulating the AI models that powered them unless they were listed as dangerous.

Under the proposal, organizations offering risky AI tools must meet certain requirements to ensure these systems are secure before they are deployed.

AI software that creates manipulated videos and “deepfake” images must disclose that people are viewing AI-generated content.

Other uses have been banned or restricted, such as real-time facial recognition software. Violators can be fined 6% of their global sales. Some experts warned that the bill did not sufficiently consider future AI twists.

Washington’s Game

Jack Clark, founder of AI startup Anthropic, had been visiting Washington for years to teach AI classes to lawmakers. Almost always, only a few congressional aides showed up.

But after ChatGPT went viral, his presentations were packed with lawmakers and aides clamoring to hear his crash course on AI and views on rulemaking.

“Everyone has kind of woken up to this technology,” said Clark, whose company recently hired two Washington lobbying firms.

Without technical knowledge, policymakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other AI makers to explain how it works and help create rules.

“We’re not experts,” said Rep. Ted Lieu, D-Calif., who hosted OpenAI CEO Sam Altman and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”

Tech companies have seized their advantage. In the first half of the year, many of Microsoft and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss AI legislation, according to disclosures.

OpenAI has registered its first three lobbyists, and a technology lobbying group launched a $25 million campaign to promote the benefits of AI this year.

Over the same period, Altman met with more than 100 members of Congress, including former House Speaker Kevin McCarthy, a California Republican, and Senate leader Chuck Schumer, a New York Democrat.

After testifying before Congress in May, Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Sunak and Prime Minister Narendra Modi of India.

In Washington, activity around AI has been frenetic — but with no legislation to show for it.

In May, following a White House meeting on AI, the leaders of Microsoft, OpenAI, Google and Anthropic were asked to draft self-regulations to make their systems more secure, said Brad Smith, president of Microsoft. After Microsoft sent suggestions, Commerce Secretary Gina Raimondo returned the proposal with instructions to add more promises, he said.

Two months later, the White House announced that the four companies had agreed to voluntary commitments on AI safety, including testing their systems through third-party supervisors — which most companies were already doing.

“It was brilliant,” said Smith. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do, and we’ll push you to do more.'”

Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued to host technology executives.

In September, Schumer hosted Elon Musk, Meta’s Mark Zuckerberg, Google’s Sundar Pichai, Microsoft’s Satya Nadella and Altman in a closed-door meeting with lawmakers in Washington to discuss AI rules. Musk warned about the “civilizational” risks of AI, while Altman proclaimed that AI could solve global problems like poverty.

Fleeting Collaboration

In May, Vestager, Raimondo and Antony Blinken, US Secretary of State, met in Lulea, Sweden, to discuss cooperation on digital policy.

After two days of talks, Vestager announced that Europe and the United States would launch a shared code of conduct to protect AI “within weeks.” She sent messages to colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we cannot afford to lose.”

Months later, no shared code of conduct had been released. Instead, the United States announced its own AI guidelines.

Little progress has been made internationally regarding AI. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for borderless technology.

Some policymakers said they expected progress at an AI safety summit held by the United Kingdom last month at Bletchley Park, where mathematician Alan Turing helped crack the Enigma code used by the Nazis. The meeting was attended by Vice President Kamala Harris; Wu Zhaohui, vice minister of science and technology of China; Musk; and others.

The result was a 12-paragraph statement outlining AI’s “transformative” potential and the “catastrophic” risk of misuse. Participants agreed to meet again next year. Negotiations ultimately resulted in an agreement to continue negotiating.
