Free choice must be free

Facebook recently came under fire for allegedly targeting at-risk youth in Australia. A 23-page leaked document details the power of the company’s algorithms to approximate the emotional state of users as young as 14 years old by monitoring their interactions and uploaded content. Advertisers can then learn when users feel “worthless,” “silly,” “overwhelmed,” “nervous,” “defeated,” “stressed,” “stupid,” “useless” and other moments when “young people need a confidence boost.”

Even though Facebook has vigorously denied these allegations, many have accused the company of offering its advertisers a best-selling product: young teens in vulnerable moments, to whom the fleeting happiness of mindless shopping would be highly appealing.

It’s hard to know at this point whether Facebook did or did not offer a manipulative targeting product to its advertisers. What matters, however, is that it could. Facebook has all the capabilities required to turn such a dystopian scenario into an entirely plausible reality. With more than two billion monthly users, a comprehensive data collection scheme, first-rate data analytics and aspirations to, quite literally, read our minds, Facebook can not only infer personal details we never disclosed, but also come to know us better than we know ourselves.

The manipulation scenario also sounds plausible because we’ve seen other companies use their massive data repositories, and the user profiles generated from them, to subtly encourage a certain course of action. This is how Netflix, by automatically playing the next episode, nudges us into binge-watching even when we should be prioritizing other tasks.

The same applies to Amazon’s customized recommendations, which prompt users to buy more stuff that they, more often than not, do not need. Similarly, Uber incentivized drivers to take on more rides by showing them their next fare opportunity while they were still on a current ride, and by sending reminders about their pre-set income goals. Facebook was also able to boost voter turnout by showing users a “go vote” message, and Google prides itself on redirecting potential ISIS recruits toward content that counters and undermines ISIS’ messaging.

In a way, there is nothing new about such attempts to architect individual choices through subtle changes to the environment in which they are made. As Richard Thaler and Cass Sunstein showed in their book and articles, people have always used the design and presentation of choices to influence consumer decision-making.

Think about the way your local supermarket is organized. It is no accident that goods placed at eye level are more likely to be bought, or that the bakery sits at the far end of the store: nothing draws you there quite like the smell of freshly baked pastries. Recognizing the power of choice architecture, some have advocated nudging individuals toward choices that are in their best interest, and some governments have followed suit by employing nudging methods.

Indeed, nudges have been used since the dawn of humanity. Moreover, because any choice must somehow be presented, and because the presentation inevitably influences the decision, nudges are practically unavoidable. Yet something feels profoundly different about data-driven nudges. The sheer volume of data gives online nudges a scale we have never encountered before.

The ability to sort humanity into finely particularized boxes based on an ever-growing list of interests, personalities, opinions and even undisclosed thoughts and emotions, with a relatively high level of accuracy, is unprecedented. And while in some settings we expect a certain degree of nudging, as in online shopping, targeted nudges in other settings catch us completely off guard.

This is why the possibility that Facebook offered advertisers a way to capitalize on teenagers’ mental states provokes so much outrage: when teenagers interact with their friends on Facebook, they are unlikely to recognize nudges as such. Similarly, the power of Google’s search algorithm to surreptitiously shift the voting preferences of undecided voters by 20 to 80 percent is deeply troubling, and not only because it could threaten the basic principles of democracy. The power to interfere with political views is so alarming, and so effective, for the same reason: it can be used when and where we least expect it.

In a way, nudges have been so appealing to policy makers and corporate players alike because they provide a near-perfect mechanism for directing behavior while respecting individual autonomy: they leave room for real individual choice. But in the move from traditional nudges to data-driven nudges on fully controlled digital platforms, that autonomy-respecting quality seems to be eroding.

It is time to have a serious conversation about the limits of choice architecture online. We should acknowledge legitimate business interests alongside the long-established practice of choice architecture. At the same time, we should put restrictions in place to prohibit the use of nudges when they act as mere exploitation of weaknesses for the sake of profit-making. Rules should detail who can be part of the choosing game, and who is off-limits.

Children, the elderly and people with mental and cognitive disabilities must be protected against manipulative nudges. Additional classifications, such as individuals who have recently experienced the loss of a loved one, should also be considered. Data-driven nudges, and especially interactive nudges that correspond to changes in individuals’ mental state, must be curbed to reflect the original quality of the nudge — guiding behavior while safeguarding individual autonomy. Whenever the latter is lost, the former should not be pursued.
