Scientist wants to take AI out of the big tech monopoly – 10/23/2023 – Tech

Ali Farhadi is no technology rebel. The 42-year-old computer scientist is a highly respected researcher, a professor at the University of Washington and the founder of the AI startup Xnor.ai, which was acquired by Apple, where he worked until four months ago.

But Farhadi, who in July became executive director of the Allen Institute for Artificial Intelligence, is advocating “radical openness” to democratize research and development in a new wave of artificial intelligence that many believe is the most important technological advance in decades.

The Allen Institute has started an ambitious initiative to build a freely available artificial intelligence alternative to those of tech giants like Google and startups like OpenAI. Through the industry practice known as open sourcing, other researchers will be able to examine and use the new system and the data fed into it.

The stance taken by the Allen Institute, an influential nonprofit research center in Seattle, places it firmly on one side of a fierce debate over how open or closed new artificial intelligence should be. Would opening up so-called generative artificial intelligence, which powers chatbots like OpenAI’s ChatGPT and Google’s Bard, lead to more innovation and opportunity? Or would it open a Pandora’s box of digital damage?

What “open” means in the context of generative artificial intelligence varies. Traditionally, open-source software projects publish the code behind their programs. Anyone can then examine the code, identify bugs, and suggest changes, and rules determine how those changes are incorporated. This is how popular open-source projects like the Linux operating system, the Apache web server, and the Firefox browser work.

Powerful and unpredictable

But generative artificial intelligence involves more than code. The models are trained and fine-tuned on round after round of enormous amounts of data. However well-intentioned the effort may be, experts warn, the path the Allen Institute is pursuing is inherently risky.

“Decisions about opening up artificial intelligence systems are irreversible and will likely be some of the most important of our time,” said Aviv Ovadya, a research fellow at Harvard University’s Berkman Klein Center for Internet & Society. He believes that international agreements are necessary to determine which technologies should not be publicly disclosed.

Generative AI is powerful but often unpredictable. It can instantly write emails, poetry and academic papers, and answer almost any question imaginable with human-like fluency. But it also has a disturbing tendency to make things up, which researchers call “hallucinations.”

The major chatbot makers, Microsoft-backed OpenAI and Google, have kept their latest technology under wraps, not revealing how their artificial intelligence models are trained and tuned. Google in particular had a long history of publishing its research and sharing its artificial intelligence software, but it increasingly kept its technology to itself as it developed Bard.

The companies say this approach reduces the risk of criminals hijacking the technology to flood the internet with even more misinformation and scams, or to engage in more dangerous behavior. Advocates of open systems acknowledge the risks but argue that having more smart people working to combat them is the better answer.

When Meta launched an artificial intelligence model called LLaMA (Large Language Model Meta AI) this year, it created a stir. Farhadi praised Meta’s initiative, but does not believe it goes far enough.

“Their approach is basically: I did something magical. I’m not going to tell you what it is,” he said. Farhadi proposes to disclose the technical details of the artificial intelligence models, the data they were trained on, the fine-tuning that was done and the tools used to evaluate their behavior.

The Allen Institute took the first step by making a huge data set available to train artificial intelligence models. It is made up of publicly available data on the web, books, academic journals, and computer code. The dataset is curated to remove personally identifiable information and toxic language such as racist and obscene phrases.

Curating the data involves judgment calls. Will removing certain language deemed toxic reduce a model’s ability to detect hate speech?
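
As a rough illustration of the kind of filtering such curation involves, here is a minimal sketch in Python; the patterns and blocklist terms are placeholders invented for this example, and a real pipeline would rely on far more sophisticated classifiers and dedicated PII-detection tools.

import re

# Illustrative only: a toy filter in the spirit of the curation described above.
# The regexes and blocklist are placeholders, not the Allen Institute's actual tooling.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
BLOCKLIST = {"placeholder_slur_1", "placeholder_slur_2"}  # stand-ins for toxic terms

def scrub(document: str):
    """Drop documents containing blocklisted terms and mask simple PII."""
    if any(term in document.lower() for term in BLOCKLIST):
        return None  # drop the whole document
    document = EMAIL_RE.sub("[EMAIL]", document)
    return PHONE_RE.sub("[PHONE]", document)

print(scrub("Contact me at jane@example.com or 555-123-4567."))
# prints: Contact me at [EMAIL] or [PHONE].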

AI is ‘black box’ technology

The Allen Institute’s database is the largest open data set currently available, Farhadi said. Since it was released in August, it has been downloaded more than 500,000 times on Hugging Face, an open-source artificial intelligence resource and collaboration site. At the Allen Institute, the dataset will be used to train and tune a large generative artificial intelligence program called OLMo (Open Language Model), which will be launched this year or early next year.
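
For readers who want to inspect the released data themselves, the sketch below shows roughly what loading it from Hugging Face could look like with the Python datasets library; the repository name is assumed for illustration, and the actual dataset may require accepting usage terms first.

# A rough sketch of streaming an open corpus from Hugging Face for inspection.
# The repository id is an assumption; check the Allen Institute's Hugging Face
# organization for the real dataset name and any access terms.
from datasets import load_dataset

corpus = load_dataset("allenai/dolma", split="train", streaming=True)

# Look at the first few documents without downloading the whole corpus.
for i, doc in enumerate(corpus):
    print(doc.get("text", "")[:200])
    if i >= 4:
        break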

The big commercial AI models, Farhadi said, are “black box” technology. “We’re advocating for a glass box,” he said. “Open everything up and then we can discuss the behavior and explain in part what’s going on internally.”

Only a few major generative AI models of the size the Allen Institute has in mind are publicly available. They include Meta’s LLaMA and Falcon, a project supported by the Abu Dhabi government. The Allen Institute seems like a logical home for a major AI project.

“It’s well-funded but operates on academic values and has a history of helping advance open science and AI technology,” said Zachary Lipton, a computer scientist at Carnegie Mellon University.

The Allen Institute is working with others to contribute to its open vision. This year, the nonprofit Mozilla Foundation invested $30 million in a startup, Mozilla.ai, to build open-source software that will initially focus on developing tools surrounding open AI engines, like the Allen Institute’s, to make them easier to use, monitor and deploy.

The Mozilla Foundation, founded in 2003 to promote the maintenance of the internet as a global resource open to all, is concerned about a greater concentration of technological and economic power. “A small group of players, all on the West Coast of the US, are trying to dominate the generative AI space before it has even really started,” said Mark Surman, president of the foundation.

US$ 1 billion in development

Farhadi and his team are working in advance to contain the risks of their openness strategy. For example, they are exploring ways to evaluate a model’s behavior during the training phase and then block certain actions, such as racial profiling and making biological weapons. Farhadi considers the guardrails on the big chatbot models to be ‘band-aids’ that clever hackers can easily rip off.

“My argument is that we shouldn’t allow this kind of knowledge to be encoded in these models,” Farhadi said. People will do bad things with this technology, just as they have done with all powerful technologies, he says.

For him, society’s task is to better understand and manage the risks. Openness, he says, is the best bet for finding safety and sharing economic opportunity.

“Regulation alone will not solve this,” Farhadi said.

The Allen Institute’s effort faces some formidable obstacles. One of the main ones is that building and improving a large generative model requires a lot of computational power. Farhadi and his colleagues say emerging software techniques are more efficient. Still, he estimates the Allen Institute initiative will require $1 billion in development over the next few years.

Farhadi has begun seeking support from government agencies, private companies and technology philanthropists, but he declined to say whether he has already secured backers.

If he succeeds in raising the money, the bigger test will be building a lasting community to support the project.

“It takes an ecosystem of open players to really affect the big guys,” said Surman of the Mozilla Foundation. “And the challenge in this type of game is just patience and tenacity.”
