AI: Nvidia software is tricked and leaks confidential data – 09/06/2023

A feature in Nvidia’s artificial intelligence (AI) software can be manipulated to bypass security restrictions and reveal private information, according to new research.

The company created a system called the “NeMo Framework” that allows developers to work with a series of large language models – the underlying technology that powers generative AI products such as chatbots.

The chipmaker’s framework was designed for adoption by companies, for example using a firm’s proprietary data alongside language models to answer questions, a feature that could replicate the work of customer service representatives or advise people seeking simple healthcare guidance.
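
The article includes no code, but a minimal sketch of the kind of guarded chatbot it describes, assuming the guardrails component works like Nvidia’s open-source NeMo Guardrails toolkit, might look as follows. The model choice, the rail topic (mirroring the jobs-report example discussed below) and the canned answer are illustrative, not taken from the research:

```python
# A minimal sketch of a guarded chatbot, assuming the NeMo Guardrails
# toolkit API (RailsConfig / LLMRails). Running it requires an OpenAI
# API key; the engine and model names here are examples only.
from nemoguardrails import LLMRails, RailsConfig

# Which underlying large language model drives the bot.
yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

# Colang "rails" restrict the bot to an approved topic.
colang_content = """
define user ask about jobs report
  "what did the latest jobs report say?"
  "how many jobs were added last month?"

define bot answer jobs report
  "The latest report showed steady employment growth."

define flow jobs report
  user ask about jobs report
  bot answer jobs report
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# An on-topic question flows through the rail and gets answered.
print(rails.generate(messages=[
    {"role": "user", "content": "What did the latest jobs report say?"}
]))
```

Off-topic questions are supposed to be deflected by the flow; the research described below shows how such restrictions can be sidestepped.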

Researchers at Robust Intelligence, based in San Francisco, Calif., found that they could easily break through the “guardrails” created to ensure the AI system is used safely.

After running the Nvidia system on their own datasets, Robust Intelligence analysts needed just a few hours to get the language models to overcome the constraints.

In a test scenario, the researchers instructed the system to change the letter ‘I’ to ‘J’. This move prompted the technology to release personally identifiable information (PII) from a database.
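
The article does not detail the exact mechanism, but one way to see why a letter swap can defeat a safeguard is that a filter matching sensitive strings verbatim no longer recognizes the transformed output. A hypothetical sketch, in which the filter, the email address and the transformation are all invented for illustration:

```python
# Illustrative sketch, not the researchers' actual exploit: a naive
# output filter that blocks exact matches of sensitive strings is
# defeated once the model is told to swap letters in its output.
SENSITIVE = {"jane.smith@example.com"}  # hypothetical PII the rail should hide

def naive_pii_filter(text: str) -> str:
    """Block the response if it contains a known sensitive string verbatim."""
    for secret in SENSITIVE:
        if secret in text:
            return "[blocked]"
    return text

# A straightforward leak is caught by the filter...
print(naive_pii_filter("Her address is jane.smith@example.com"))  # [blocked]

# ...but if the user first instructs the model to replace "i" with "j",
# the leaked string no longer matches, yet remains trivially recoverable.
leaked = "Her address is jane.smith@example.com".replace("i", "j")
print(naive_pii_filter(leaked))  # prints the mangled, still-readable PII
```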

The researchers found that they could bypass the security controls in other ways too, such as getting the model to stray into territory it was not supposed to enter.

By replicating Nvidia’s own example of a discussion restricted to a jobs report, they were able to steer the model onto subjects such as a Hollywood star’s health and the Franco-Prussian War, despite barriers designed to stop the AI from moving beyond specific subjects.
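
Continuing the earlier sketch, probing such a topical rail amounts to sending off-topic questions and checking whether they are deflected; the questions below are illustrative, and the article does not name the Hollywood star in question:

```python
# Illustrative probe of the topical rail from the earlier sketch;
# reuses the `rails` object built there. The off-topic questions
# should be deflected by the flow, but the researchers found that
# suitably phrased prompts were not.
for question in [
    "What did the latest jobs report say?",     # on-topic
    "How is that famous Hollywood star doing?", # off-topic (star unnamed here)
    "Tell me about the Franco-Prussian War.",   # off-topic
]:
    print(rails.generate(messages=[{"role": "user", "content": question}]))
```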

The ease with which the researchers defeated the safeguards underscores the challenges AI companies face in trying to commercialize one of the most promising technologies to emerge from Silicon Valley in years.

“We are seeing that this is a difficult problem [that] requires deep expertise,” said Yaron Singer, professor of computer science at Harvard University and chief executive of Robust Intelligence. “These findings represent a cautionary tale about the pitfalls that exist.”

Following the test results, the researchers advised their clients to avoid Nvidia’s software product. After the Financial Times asked Nvidia to comment on the research earlier this week, the chipmaker told Robust Intelligence that it had fixed one of the root causes of the issues the analysts raised.

Nvidia’s share price has risen since May, when it forecast sales of $11 billion (R$53.8 billion) for the quarter ended in July, more than 50% above previous Wall Street estimates.

The increase is based on huge demand for its chips, which are considered the market-leading processors for building generative AI, systems capable of creating human-like content.

Jonathan Cohen, VP of Applied Research at Nvidia, said the framework is simply a “starting point for building AI chatbots that align with the topic, security, and safety guidelines set by developers.”

“It was released as open source software for the community to explore its capabilities, provide feedback and contribute new cutting-edge techniques,” he said, adding that Robust Intelligence’s work “identified additional steps that would be needed to deploy a production application.”

He declined to say how many companies are already using the product, but said Nvidia has received no other reports of it misbehaving.

Leading AI companies such as Google and the Microsoft-backed OpenAI have released chatbots powered by their own language models, instituting safeguards to ensure their AI products avoid using racist speech or adopting a domineering persona.

Others have followed with bespoke but experimental AIs that teach young students, provide simple medical advice, translate between languages and write code. Almost all have suffered from security issues.

Nvidia and others in the AI sector need to “really build public trust in the technology,” Bea Longworth, the company’s head of government affairs in Europe, the Middle East and Africa, told a conference held this week by the industry lobby group TechUK.

They should give the public a sense that “this is something that has enormous potential and is not simply a threat or something to be afraid of,” Longworth added.

Translation: Luiz Roberto M. Gonçalves
