Digital tests put ‘real world’ at risk

You’re probably being tested right now. Organizations run countless experiments online, trying to learn how to keep our eyes glued to the screen, convince us to buy a new product, or provoke a reaction to the latest news.

But they often do so without warning us — and with unintended, and sometimes negative, consequences.

In a recent study, my colleagues and I examined how experiments run on a digital job platform can determine your next job, your salary, and your visibility to potential employers. Such experiments are widespread, and they are often carried out without workers’ knowledge or consent.

In 2022, a study reported in The New York Times found that the professional networking platform LinkedIn had experimented on millions of users without their knowledge.

These tests had a direct effect on users’ careers, the authors stated, with many experiencing fewer opportunities to connect with potential employers.

Uber ran similar experiments on fares, changes that many drivers told media outlets reduced their earnings. Experiments on social media platforms have contributed to the polarization of online content and the growth of “echo chambers,” according to research published in the journal Nature.

And Google constantly tests its search results, a practice that German academics found had pushed spam sites to the top of the rankings.

The problem is not experimentation itself, which can help companies make data-driven decisions. The problem is that most organizations lack internal or external mechanisms to ensure that experiments clearly benefit their users as well as themselves.

Countries also lack strong regulatory frameworks governing how organizations run online experiments and handle the side effects those experiments can have. Without such protections, the consequences of unregulated experimentation could be disastrous for everyone.

In our study, workers who found themselves unwitting guinea pigs expressed paranoia, frustration, and contempt at having their livelihoods experimented on without their knowledge or consent. The consequences cascaded, affecting their income and well-being.

Some declined to offer ideas for how the digital platform could improve. Others stopped believing that any change was real. Instead, they sought to limit their online engagement.

The impact of unregulated online experimentation is likely to become even more widespread and pronounced.

Amazon has been accused by US regulators of using experiments to raise product prices, stifle competition, and increase user fees. Scammers use online and digital experimentation to prey on elderly and vulnerable people.

And now, generative artificial intelligence tools are reducing the cost of producing content for digital experimentation. Some organizations are even deploying technology that could allow them to test our brain waves.

This growing integration of experimentation into everyday life represents what we call the “experimental hand”: a force that shapes workers, users, customers, and society in ways that are poorly understood and can have serious consequences. Even with the best intentions, and without multiple checks and balances, this experimental culture can be disastrous for people and society.

But we need not accept a Black Mirror future in which our every movement, interaction, and thought is subject to exploratory experimentation. Organizations and policymakers would be wise to learn from the mistakes scientists made half a century ago.

The infamous 1971 Stanford Prison Experiment, in which psychology professor Philip Zimbardo randomly assigned participants to the roles of prisoner or prison guard, quickly spiraled out of control, with guards subjecting prisoners to appalling psychological abuse.

Despite observing these consequences, Zimbardo did not stop the experiment. It was Christina Maslach, a PhD student who had come to help conduct interviews, who raised strong objections and helped bring it to an end.

The lack of oversight over how such experiments were designed and run accelerated the adoption of Institutional Review Boards (IRBs) at universities. Their goal is to ensure that every experiment involving human subjects is conducted ethically and complies with the law, including obtaining informed consent from participants and allowing them to withdraw.

For IRBs to function beyond academia, organizational leaders must ensure they include independent experts with diverse backgrounds who can enforce the highest ethical standards.

But this is not enough. Facebook’s notorious 2012 experiment, in which it altered the balance of positive and negative posts in users’ feeds to measure their emotional reactions, was approved by Cornell University’s IRB. The social media platform claimed that users’ acceptance of its terms of service constituted informed consent.

We also need collective responsibility to ensure that organizations run ethically robust experiments. Users themselves are often the closest to the consequences and the best placed to provide input, and a diverse group of them should have a voice in the design of any experiment.

If organizations are unwilling to respond to user demands, those exposed to the experiments can create their own platforms to stay informed. Workers on Amazon Mechanical Turk (MTurk), Amazon’s crowdsourcing marketplace, for example, built Turkopticon, a collective system for rating employers, after MTurk refused to provide them with such ratings.

It should not take another Zimbardo experiment to encourage organizations and governments to institute safeguards for ethical experimentation. Nor should they simply wait for regulators to act. Maslach didn’t hesitate, and neither should we.

Tim Weiss, assistant professor of innovation and entrepreneurship at Imperial College London, and Arvind Karunakaran, assistant professor of management science and engineering at Stanford University, contributed to this article.
