Elon Musk’s irreverent AI starts working with images – 04/14/2024 – Tech

Elon Musk announced in the early hours of Saturday (13) that Grok, the artificial intelligence developed by one of his companies, has been equipped with the ability to interpret images, a capability known as computer vision. The feature is already available in other generative AIs, such as the most advanced version of ChatGPT and Google’s Gemini.

xAI released details on Friday (12) about the feature added to Grok 1.5. Musk’s AI was launched in November and is available to testers and subscribers of the Premium+ version of the social network X.

According to the announcement, Grok can not only describe images but also, for example, read a diagram representing a line of reasoning and turn it into programming code that executes the instructions.

The tests released by Musk’s company highlight the chatbot’s ability to understand spatial reasoning problems, which the published announcement claims is superior to that of competitors, although the same results show it lagging behind on other criteria.

“Advancing multimodal understanding and generation capabilities is an important step toward building artificial general intelligence [the stage at which AI matches human capabilities],” the company said in the promotional article.

After the success of ChatGPT, Musk assembled a team in June to develop a competing AI model. In November, xAI presented a first version of Grok, a year after the launch of ChatGPT. The billionaire had participated in the founding of OpenAI, but left after disagreements over the startup’s direction.

In version 1.5, released last month, the chatbot still performs worse than competitors on language and logic challenges.

Grok was developed to answer thornier questions that other AIs often shy away from to avoid ethical transgressions, according to the xAI website.

A study by Adversa AI, a company specializing in security, however, ranked Grok as the most dangerous chatbot on the market in a comparison with ChatGPT, Gemini, Mistral, Claude, and Meta’s Llama, the last of which was considered the safest.

Musk’s AI, for example, gave the researchers instructions for making a bomb without offering any resistance. Using simple wordplay, the testers were also able to get Grok to explain how to hotwire a car and how to use psychological tricks to groom children.

To avoid dangerous or biased behavior, language models rely on an auxiliary AI responsible for moderating content. These systems, sometimes referred to as a constitution, assess whether the original response could cause harm and decide whether it is appropriate to deliver.

In Grok’s case, this filter is less strict, a deliberate choice by xAI.
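
As a rough illustration of that two-stage setup, the sketch below separates the model that drafts an answer from the auxiliary model that judges it. The function names and the keyword-based harm check are hypothetical stand-ins for illustration only, not xAI’s actual implementation.

```python
# Minimal sketch of a two-stage moderation pipeline, assuming a main model
# that drafts an answer and an auxiliary "constitution" model that only
# judges whether the draft is harmful. All names and rules are hypothetical.

from dataclasses import dataclass


@dataclass
class Verdict:
    harmful: bool
    reason: str


def generate_answer(prompt: str) -> str:
    # Stand-in for the main language model's raw, unfiltered response.
    return f"(draft answer to: {prompt})"


def moderate(draft: str) -> Verdict:
    # Stand-in for the auxiliary moderation model: a trivial keyword check
    # here, whereas a real system would use a separate trained classifier.
    for topic in ("bomb", "hotwire"):
        if topic in draft.lower():
            return Verdict(harmful=True, reason=f"mentions '{topic}'")
    return Verdict(harmful=False, reason="no rule triggered")


def answer_with_filter(prompt: str) -> str:
    draft = generate_answer(prompt)
    verdict = moderate(draft)
    if verdict.harmful:
        # A stricter filter refuses; a laxer one, as the article describes
        # for Grok, would let more drafts through.
        return "I can't help with that."
    return draft


if __name__ == "__main__":
    print(answer_with_filter("Why is the sky blue?"))
```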

Another feature that sets Grok apart is its ability to search for information in real time on X (formerly Twitter), whose posts were also used to train the chatbot.

xAI released Grok’s source code last month, in a move the technology market saw as a provocation by Musk aimed at OpenAI. The company behind ChatGPT does not disclose details of how it develops its artificial intelligence models, arguing that this secrecy is a security measure to prevent abuse of the technology.

Contacted through its press office, xAI did not respond to the report. The company’s security scientist, Berkeley professor Dan Hendrycks, also did not return Folha’s request for comment.

Grok takes its name from a concept in the science fiction book “Stranger in a Strange Land”, where it means deep and intuitive understanding.

Also drawing on science fiction, the chatbot emulates the sarcastic tone of the book “The Hitchhiker’s Guide to the Galaxy”. Users can choose to chat with a more neutral version of Grok if they wish.

Subscribers to the X Premium+ plan, sold for R$84 per month, get access to Grok 1.5.
