Regulating artificial intelligence is a challenge in 4D – 05/26/2023

The leaders of the G7 nations addressed many global concerns over steamed Nomi oysters this past weekend in Hiroshima: war in Ukraine, economic resilience, clean energy and food security, among others. But they also threw an extra item into their bag of good intentions: the promotion of inclusive and reliable artificial intelligence.

While acknowledging AI's groundbreaking potential, the leaders are concerned about the harm it could do to public safety and human rights. In launching the Hiroshima AI Process, the G7 commissioned a working group to analyze the impact of generative AI models such as ChatGPT and to prepare the ground for leaders' discussions by the end of this year.

Initial challenges will be how best to define AI, categorize its hazards, and frame an appropriate response. Is it better to leave regulation to existing national agencies? Or is technology so important that it requires new international institutions? Do we need a modern equivalent of the International Atomic Energy Agency, founded in 1957 to promote the peaceful development of nuclear technology and prevent its military use?

How effectively the UN body has accomplished this mission is debatable. Moreover, nuclear technology involves radioactive material and massive infrastructure that is physically easy to detect. AI, by contrast, is comparatively cheap, invisible, pervasive, and has countless use cases. At the very least, it presents a four-dimensional challenge that must be addressed in more flexible ways.

The first dimension is discrimination. Machine learning systems are designed to discriminate: to spot outliers in patterns. That is good for detecting cancerous cells in radiology scans. But it is bad when black-box systems trained on flawed datasets are used to hire and fire workers or approve bank loans. Banning such systems in areas of unacceptably high risk, as the European Union's forthcoming AI Act proposes, is a strict, precautionary approach. Creating independent, specialized auditors may be a more adaptable path.

Second, misinformation. As the academic Gary Marcus warned the US Congress last week, generative AI could endanger democracy itself. Such models can generate plausible lies and fake humans at lightning speed and on an industrial scale.

Tech companies themselves should shoulder the burden of certifying content and minimizing misinformation, just as they suppressed email spam. Failure to do so will only amplify calls for more drastic intervention. A precedent may have been set in China, where draft rules place responsibility for the misuse of AI models on producers rather than users.

Third, displacement. No one can accurately predict the overall economic impact of AI. But it seems pretty certain that it will lead to the “deprofessionalization” of many white-collar jobs, as the entrepreneur Vivienne Ming put it at the FT Weekend Festival in Washington, DC.

Computer programmers have widely embraced generative AI as a productivity-enhancing tool. Striking Hollywood screenwriters, by contrast, may be the first of many professionals to fear that their core skills will be automated. This messy story defies simple solutions. Nations will have to adapt to the social challenges in their own ways.

Fourth, devastation. Incorporating AI into lethal autonomous weapons systems (LAWS), or killer robots, is a terrifying prospect. The principle that humans must always remain in the decision-making loop can only be established and enforced through international treaties. The same goes for the debate around artificial general intelligence, the (possibly fictional) day when AI surpasses human intelligence in every field. Some activists dismiss this scenario as a speculative fantasy. But it is surely worth heeding the experts who warn of potential existential risks and call for international research collaboration.

Others might argue that trying to regulate AI is as futile as praying the sun doesn’t go down. Laws always evolve incrementally, while AI is developing exponentially. But Marcus says he was heartened by the bipartisan consensus for action in the US Congress. Fearing, perhaps, that EU regulators will set global standards for AI, as they did five years ago with data protection, US tech companies are also publicly supporting regulation.

G7 leaders must encourage a competition for good ideas: the task now is to trigger a regulatory race to the top, rather than preside over a frightening slide to the bottom.

The author is the founder of Sifted, a website about European startups supported by the FT
