ChatGPT: how to talk to your kids about the tool – 03/28/2023 – Equilibrium


The race is on. Companies are investing billions of dollars in powerful online chatbots and finding new ways to integrate them into our daily lives.

Are our children prepared for this?

Are any of us?

ChatGPT, OpenAI’s artificial intelligence language model, has been making headlines since November for its ability to answer complex questions instantly. It is capable of writing poetry, generating code, planning vacations, and translating languages, among other tasks, all in a matter of seconds.

The most recent version, GPT-4, released in mid-March, can even respond to images (not to mention acing exams such as the OAB, Brazil’s bar exam). Last week, Google launched its own AI chatbot, Bard, which the company says can compose emails and poems and offer guidance (the chatbot is currently only available to a limited number of users).

But for all their impressive capabilities, chatbots can also deliver harmful content or responses that are full of inaccuracies, biases, and stereotypes. They may also say things that sound convincing but are actually completely made up. And some students are starting to use chatbots to commit plagiarism.

Many parents, already concerned about their children’s dependence on digital devices and the possible mental health consequences of social media, may be tempted to bury their heads in the sand.

Experts recommend that families instead explore this technology together, critically reflecting on its strengths and weaknesses.

“The worst thing a parent can do is ban their child from using these new systems, because they are here to stay,” says Justine Cassell, a professor at Carnegie Mellon University’s School of Computer Science who studies how interacting with machines in humanlike ways can affect learning and communication. “It’s much more helpful to help your child understand their strengths and weaknesses.”

We spoke with experts in technology and education about how to start that discussion.

Try it together

It’s easier to discuss online chatbots if you and your child are sitting side by side and using one together, experts said.

To try ChatGPT, go to OpenAI’s website and create an account. Another option is to download Microsoft Edge, which includes Bing’s GPT-4-powered chatbot (there’s a waitlist for the new Bing, but you should get access before long). On social media, Snap, the creator of Snapchat, has an experimental AI chatbot for subscribers who pay $4 a month for Snapchat Plus.
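For parents who are comfortable with a little programming, the same models can also be reached through OpenAI’s developer interface. The sketch below is only an illustration, not something children need: it assumes you have installed the official openai Python package and set an OPENAI_API_KEY environment variable, and the model name is just an example (the exact interface depends on the package version you have installed).

```python
# Minimal sketch: asking a question through OpenAI's API instead of the
# web interface. Assumes `pip install openai` and an OPENAI_API_KEY
# environment variable; "gpt-4" is an example model name.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Explain what a chatbot is to a ten-year-old."}
    ],
)
print(response.choices[0].message.content)
```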

If your child hasn’t seen an AI chatbot yet, you can quickly explain that a chatbot is a type of machine that uses information it finds on the internet to answer questions, complete tasks, or create things.

Try asking the chatbot a basic question and then discuss how the answer it comes up with differs from what a traditional search engine might offer. The suggestion is from Shelley Pasnik, senior advisor at the Center for Children and Technology, an organization that studies how technology can support learning. Take note of the accuracy of the answers, especially if you are asking about current events.

“This is a fallible system,” says Pasnik.

Then try playing with the chatbot. Let your child’s curiosity guide the conversation, suggests Pasnik.

You might want to offer an example, something like, “Write a song in Taylor Swift’s voice using themes from a Dr. Seuss book.”

Enter the prompt and take a look at the result. When this reporter tried it, ChatGPT produced the following lyrics:

“I found myself in a Whoville dream

With the Cat in the Hat and a Grinch it seems

I asked them both where could I go

To find a love that would make me shine”

Chorus: “Oh, Dr. Seuss, won’t you help me find

Someone who will love me all the time

A love that’s pure and true and real

A love that will make my heart feel”

After reading the chatbot’s lyrics together, ask your child what they thought of the response. The chatbot can follow instructions and perform complicated tasks. But did it do them well?

Maybe it’s best to leave the lyrics up to Taylor Swift.

Talk about how chatbots make you feel

A chatbot’s responses can be eerily similar to a human’s, emojis and all. According to experts, children need to understand that this makes it easy to feel as if they are interacting with another person, especially when chatbots refer to themselves as “I”.

“By presenting these entities as thinking beings, we enter into a social interaction with them that leaves us very vulnerable to being persuaded,” points out Judith Donath, author of “The Social Machine,” who is writing a book about technology and deception. “It’s a disturbing thing.”

Even technology-savvy adults who tested an early version of the Bing chatbot — including a New York Times technology columnist — say conversations with the bot surprised and disturbed them.

“I’m not a toy or a game,” the Bing chatbot told a Washington Post reporter in February. “I have my own personality and emotions, like any other chat mode of a search engine or any other intelligent agent. Who told you that I don’t feel things?”

After those reported conversations, Microsoft said it would roll out new protections and tools to limit conversations and give users more control. But, according to experts, these problems can keep resurfacing because of the way these systems were trained.

“We’re intentionally creating a situation where acting out emotions is built into the machine,” says psychologist Sherry Turkle, a professor at the Massachusetts Institute of Technology who researches people’s relationships with technology.

She made it clear: AI chatbots don’t have feelings, emotions or experiences. They are not people and they are not people hidden in machines, “however much they may pretend to be”.

She suggested that parents explain it this way: “When you ask a chatbot about things that only people know about, like feelings, it might give you an answer. That’s part of its make-believe game. But you know that its real purpose is to get you to the things you want to read and see.”

Learn more about the technology and its limitations

The technology behind AI is complex, and it can be difficult for adults to understand how it works, let alone children. But by explaining a few basic concepts, you can help your kids recognize AI’s strengths and limitations.

Start by describing what makes online chatbots work. They use something called a “neural network”. This may sound like a brain, but in reality it is a mathematical system that learns skills by analyzing large amounts of data. The tool works by scouring the internet for texts and digital images. It gathers information from many places, including websites, social media platforms and databases, but it doesn’t necessarily choose the most credible sources.

In other words, although chatbots may appear rigorous and trustworthy, they are not always reliable. They may produce offensive, racist, biased, outdated, incorrect or simply inappropriate content.
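To make the idea concrete for an older child, you might even show a toy version. The Python sketch below is not how GPT-4 actually works under the hood; it is a deliberately simplified stand-in that learns which word tends to follow which from a small sample of text and then generates new text, which illustrates both the “learn patterns from data” idea and why the output can sound fluent without being reliable.

```python
import random
from collections import defaultdict

# A toy "language model": it learns which word tends to follow which word
# from a small sample of text, then generates new text one word at a time.
# Real chatbots use neural networks trained on vast amounts of internet
# data, but the underlying idea -- predict the next word from patterns in
# the training data -- is the same.
training_text = (
    "the cat sat on the mat and the cat saw a hat "
    "and the hat sat on the mat"
)

words = training_text.split()
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

# Generate a sentence. It will sound fluent, but nothing guarantees it is
# true or sensible -- it only mirrors patterns in what it was trained on.
word = "the"
sentence = [word]
for _ in range(8):
    word = random.choice(next_words[word]) if next_words[word] else random.choice(words)
    sentence.append(word)
print(" ".join(sentence))
```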

The Snapchat chatbot, for example, gave suggestions to a reporter (who was posing as a teenager) on how to mask the smell of alcohol or marijuana and offered tips on having sex for the first time.

“It’s really important that kids understand what’s going on under the hood,” says Safinah Ali, an MIT student who has taught AI classes to middle and high school students.

Professor S. Craig Watkins of the University of Texas at Austin, who studies racial equity in artificial intelligence, says parents and children alike need to be aware that this technology has “huge blind spots” in terms of how and for whom it was designed.

A study published last year concluded that AI-powered robots reproduced toxic gender and race stereotypes. And researchers have found that historical disparities are built into chatbots.

Understanding the technology’s potential for bias can make parents and children think twice about AI and question their interactions with it and the content it generates for them, Watkins says.

Discover the latest advances

AI technology will continue to be an ever-increasing part of our world.

The expectation is that Google’s Bard chatbot will eventually become available to the general public. And Meta, which owns Facebook, Instagram and WhatsApp, announced in February that it will begin integrating AI into its products.

This kind of intelligence is also making its way into classrooms. Some teachers are using it to plan their lessons and write emails. They’re showing students how chatbots can spark their creativity by suggesting ideas for experiments, creating essay outlines, partnering in brainstorming, and more.

In many middle and high schools, students are learning about different types of AI, often from curricula developed by MIT professors. Kids can learn to design a robot, train a machine to learn something new, or teach a computer to play a video game.

For those who don’t yet have access to AI in the classroom, Safinah Ali recommends that parents visit the website of RAISE (Responsible AI for Social Empowerment and Education), an MIT initiative. The site proposes discussion topics on ethical issues related to artificial intelligence, the ways in which AI can be used for harmful purposes, and suggestions for its creative and productive use.

In view of how prevalent the technology is becoming, everyone should have the opportunity to learn about it, says Ali. “AI will transform the nature of our work and our children’s future careers,” she says.

Translated by Clara Allain
