Your Happiness Was Hacked

Chapter 5: How Technology Removes Our Choices

The Tricks and Tactics Tech Uses to Control Our Actions and Stoke Addictions

If you use Google to search for “Italian restaurant,” you are likely to see a small box at the top of the screen with a few results below a map. The positioning is significant: viewers are far more likely to click on those results than on anything else on the page, much as shoppers are more likely to pick up products from shelves at eye level in supermarkets than from higher and lower shelves.[1],[2] But whereas in the physical world this limitation primarily affects our shopping experience, in the online and technology worlds, this algorithmic and sometimes intentional selection affects every subsequent thing that we see or do on that page—and far beyond it. The menu is the interface that controls the manner of engagement and sets limits on it, and the way menus are layered can radically alter the way we behave with technology.

For example, on iPhones Apple has an important—to Alex, critical—feature: the toggle that wipes the in-app advertising identifiers that app makers can use to analyze and track users. Unfortunately, Apple buries that feature three layers deep in the settings menus. As a result, few people use it, even though using it regularly might significantly benefit their privacy by making it much harder for companies to track their behavior in smartphone apps. (The industry would say that using it would lead people to have less personalized and less useful experiences, which is certainly true; there is always a trade-off.)

Apple has in general taken a strong leadership position in protecting the privacy of its customers—by minimizing storage of customer data and by designing systems such as Apple Pay to present fewer opportunities for third parties to access and potentially intercept those data. But its placement of that single toggle deep in the weeds on the iPhone illustrates how decisions by product makers influence our freedom of choice and our relationship with technology. By clearing that identifier regularly, phone users would wipe away some of the capabilities of application developers to accurately target and personalize in-product offers, e-mails, and other entreaties that further guide or limit our choices and set the agenda for us.

Another example is the ability to set notifications in the iPhone. Apple does not allow us to make global changes to the notification settings of all our apps. This means we must go through, app by app, and set notification preferences. Sure, we can turn them all off by putting our device in “Do Not Disturb” mode. But that is a clumsy fix. Apple's menu design for managing notifications reduces our choices, and not necessarily to our advantage (which seems odd coming from Apple, a company that has become dominant precisely by simplifying technology).

As a number of thinkers in this field, led by former Google design ethicist Tristan Harris, explain, menus also frame our view of the world. A menu that shows our “most important” e-mails becomes a list of the people we have corresponded with most often recently rather than of those who are most important to us. A message that asks “Who wants to meet for brunch tomorrow?” goes out to the most recent group of people we have sent a group text to, or to preset groups of friends, effectively locking in these groups and locking out new people we have met. Among the responses to e-mail that Google automatically suggests in its Inbox e-mail program, we have yet to see “Pick up the phone and call this person” as an option, even though, after a heated e-mail exchange, a call or a face-to-face conversation may well be the best way to communicate and to smooth the waters.

A feed of world news becomes a list built by a nameless, faceless algorithm of topics and events the system decides interest us. It limits our choice by confining it to options within a set of patterns deriving from our past consumption history, and this may or may not relate to our immediate needs or interests. Unfortunately, no one has yet developed an effective algorithm for serendipity.

From the start of the day, a feed of what we missed on Facebook or Twitter as we slept presents us with a menu of comparisons that stokes our fear of missing out (FOMO). This is so by design. However benign its intent, its effect is to significantly limit our frames of reference and our thinking.

A Slot Machine in Our Pocket

In May 2016, Tristan Harris published an influential essay titled “How technology is hijacking your mind—from a magician and Google design ethicist,” describing the many ways by which smartphones suck people into their vortex and demand constant attention. Harris traced the lineage of (both inadvertent and intentional) manipulation common in the design of technology products directly to the numerous techniques that slot-machine designers use to entice gamblers to sit for hours losing money.[3]

Inspired by Harris and other advocates of more-mindful technology product design, a small but growing Silicon Valley movement in behavioral design is advocating greater consideration of the ethics and the human outcomes of technology consumption. (After leaving Google, Harris launched a website, Time Well Spent, that focuses on helping people build healthier interactions with technology.)

Harris, New York University marketing professor Adam Alter, and others have criticized the various techniques that product designers are using to encourage us to consume ever more technology even to our own clear detriment. Tightly controlling menus to direct our attention is one common technique (one that is not as easily available to offline businesses). For his part, Harris suggests that we ask four questions whenever we're presented with online menus: (1) What's not on the menu? (2) Why am I being given these options and not others? (3) Do I know the menu provider's goals? (4) Is this menu empowering for my original need, or are the choices actually a distraction? We assure you, once you start asking these questions, you will never look at the Internet or at software applications in the same light again!

Another technique, alluded to in the title of Harris's slot-machine article, is the use of intermittent variable rewards: unpredictability in the rewards of an interaction. The pioneering behaviorist B. F. Skinner introduced this concept with his “Skinner box” research.[4] Skinner put rats into boxes and taught them to push levers to receive a food pellet. The rats learned the connection between behavior and reward quickly, in only a few tries. With further research, Skinner learned that the best way to keep the rats pressing the lever repeatedly was to reward them with a pellet only some of the time—to give intermittent variable rewards. Otherwise, the rats pushed the lever only when they were hungry.
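The difference between the two schedules is easy to see in a toy simulation. The sketch below is our own illustrative Python, not anything drawn from Skinner's lab: a continuous schedule rewards every lever press, while a variable-ratio schedule rewards each press only with some probability, so the subject never knows which press will pay.

    import random

    def press_lever(schedule, p=0.25):
        """Return True if this press earns a food pellet."""
        if schedule == "continuous":
            return True               # every press is rewarded
        return random.random() < p    # rewarded only sometimes, at random

    random.seed(42)  # fixed seed so the demo is reproducible
    for schedule in ("continuous", "variable-ratio"):
        presses = [press_lever(schedule) for _ in range(30)]
        print(f"{schedule:>15}: " + "".join("*" if hit else "." for hit in presses))

Run it and the continuous row prints a solid wall of pellets, while the variable-ratio row pays out at ragged, unguessable intervals—the pattern that, in Skinner's rats and in us, produces the most persistent lever pressing.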

The casinos took the concept of the Skinner box and raised it to a fine art, designing multiple forms of variable rewards into the modern computerized versions of slot machines. Those machines now take in 70 to 80 percent of casino profits (or, according to an industry official, even 85 percent).[5],[6] Players not only receive payouts at seemingly random intervals but also receive partial payouts that feel like a win even if the player in fact loses money overall on a turn. With the newer video slots, players can place dozens of bets on the repetition of a screen icon in various directions and in varying sequence lengths.

Older mechanical slot machines displayed three reels and one line. Newer video slot machines display digital icon grids of five by five or more. This allows for many more types of bets and multiple bets in the same turn. For example, the player can bet on how many times the same icon will appear in a single row, how many times it will appear on a diagonal, and how many times it will appear in a screen full of icons, all in one turn. This allows players to win one or more small bets during a turn and gain the thrill of victory, even when, in aggregate, they lose money on their collective bets for the turn. The brain's pleasure centers do not distinguish well between actual winning and the techniques that researchers call losses disguised as wins (LDW).[7] The machines are also programmed to highlight near misses (nearly enough of the right symbols), since near misses actually stimulate the same neurons as real wins do.[8]
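A toy simulation makes the arithmetic of an LDW concrete. The sketch below is again our own illustrative Python—the line counts, probabilities, and payouts are invented for the example, not drawn from any real machine—and it flags every spin in which some lines pay out yet the total returned is less than the total wagered.

    import random

    LINES = 20          # simultaneous line bets per spin
    BET_PER_LINE = 1    # credits wagered on each line
    WIN_PROB = 0.08     # chance that any single line pays (invented)
    LINE_PAYOUT = 5     # credits returned by a winning line (invented)

    random.seed(7)
    true_wins = ldws = outright_losses = 0
    for spin in range(10_000):
        wagered = LINES * BET_PER_LINE
        paid = sum(LINE_PAYOUT for _ in range(LINES) if random.random() < WIN_PROB)
        if paid >= wagered:
            true_wins += 1
        elif paid > 0:
            ldws += 1    # lights flash and music plays, but the spin lost money
        else:
            outright_losses += 1

    print(f"true wins: {true_wins}, losses disguised as wins: {ldws}, "
          f"outright losses: {outright_losses}")

With these made-up numbers, roughly three-quarters of all spins produce a celebration of some kind even though the player's balance went down—exactly the confusion that the machines' designers are counting on.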

Machine designers use myriad other clever sensory tricks—both visual and auditory—to stimulate our neurons in ways that encourage more playing. As explained in a 2014 article in The Conversation, “Losses disguised as wins, the science behind casino profits,”

Special symbols might be placed on the reels that provide 10 free spins whenever three appear anywhere within the game screen. These symbols will often make a special sound, such as a loud thud when they land; and if two symbols land, many games will begin to play fast tempo music, display flashing lights around the remaining reels, and accelerate the rate of spin to enhance the saliency of the event. When you win these sorts of outcomes you feel as though you have won a jackpot; after all, 10 free spins is 10x the chances to win big money right? The reality is that those 10 free spins do not change the already small probability of winning on any given spin and are still likely to result in a loss of money. For many games, features such as this have entirely replaced standard jackpots.[9]

What helps these techniques entice humans to keep playing is that our brains are hard-wired to become easily addicted to variable rewards. This makes sense when you consider that finding food in prehistoric, pre-agricultural times was a perfect example of intermittent variable rewards. According to research by Robert Breen, video-based gambling games (of which slots represent the majority) that rely on intermittent variable rewards produce gambling addiction three to four times faster than does betting on card games or sporting events.[10]

Smartphones were not explicitly designed to behave like slot machines, but their effect is nearly the same. As Harris writes,

When we pull our phone out of our pocket, we're playing a slot machine to see what notifications we got. When we pull to refresh our email, we're playing a slot machine to see what new email we got. When we swipe down our finger to scroll the Instagram feed, we're playing a slot machine to see what photo comes next. When we swipe faces left/right on dating apps like Tinder, we're playing a slot machine to see if we got a match. When we tap the [red badge showing us the number of notifications in an app], we're playing a slot machine to [see] what's underneath.[11]

Through this lens we can see that many actions deeply embedded in the technology we use act as variable-reward systems, and when we look at the technology in our lives, we find intermittent variable rewards in nearly every product, system, or device. Embedded in everything from e-mail to social media to chat systems to Q&A sites such as Quora, this reward structure is omnipresent and not easy for us to control without going to extremes and without constant vigilance.

The Empty Vessel of Social Approval

When you post your first picture on Instagram, the application automatically contacts your friends who are already on Instagram and asks them to give you some “love.” This is to encourage you to use the app more often and to get you hooked on social approval. It is a well-known product-design tactic in social networks and other consumer products. Both Twitter and Facebook encourage new users to immediately follow or connect with others they may already know in order to ensure that their feeds fill sufficiently to attract steady interest and to create a feedback loop of intermittent variable rewards. Sending some love seems rather innocuous, and the request is clearly not malicious in intent. But a little too much love can be bad for your soul when that love is empty and demand for it arises from a hedonic treadmill of empty accumulation rather than from real social relationships and personal recognition.

We all need and compete for social approval at some level, from our families, our friends, and our colleagues. Even if we intentionally try to avoid seeking it, social-media software and hardware, and their mass penetration via the Internet, have led social competition to occupy considerable portions of our devices, our time, and our thoughts. Teens posting messages on the popular photo-sharing site Instagram worry acutely about how many likes and comments they will receive. To members of Instagram, followers are social currency. In Snapchat, teens compete to maintain “Snapstreaks”—consecutive days of mutual messaging—with friends. On Facebook, the number of likes on a post or the number of messages you get on your birthday becomes a measure of your personal self-worth. On Twitter, journalists and intellectuals compete for retweets and “hearts.” On LinkedIn, we check to see who has viewed our profile, and the application provides us with weekly stats on the increase (as a percentage or an absolute number) in the number of people who have checked us out.

To be fair, some evidence exists that active participation in social networks leads people to feel more connected.[12] Facebook claims that chatting with friends and family, sharing pictures, and other positive interactions don't make people sad, although it concedes that negative comparisons can lead to less happiness.[13] Certain personality types, it appears, can better control the craving for constant likes and approvals, and suffer less from the inevitable comparisons with those who are more popular.

But, in general, jealous comparisons kill joy, and technology has driven us to compare ourselves with others on the most superficial of measures.[14] Furthermore, recent research on social-media use has found that it is the comparisons, which are unavoidable in social media, that contribute most to making users unhappy.[15] Teenagers appear to be particularly vulnerable to this; being excluded or unloved on social media is one of the worst humiliations a high-schooler can suffer.[16] Heavy social-media use has been linked to unhappy relationships and higher divorce rates.[17] That may follow from social media's encouragement of social comparisons and self-objectification, which tend to lower self-esteem, reduce mental health, and inculcate body shame.[18] Quitting social media has been linked to marked increases in well-being.[19]

This behavior of seeking likes and approvals also relates directly to intermittent variable rewards: the slot machine in our pockets and on our tablets and laptops. Not knowing how many likes you will get or when they will roll in, you check your social-media accounts frequently. And limits on choice and control compound the active promotion of destructive behaviors, escalating users into borderline obsessiveness.

The Bottomless Well

It's 11 p.m. on a weeknight, and you reach the end of the first episode of the latest season of Stranger Things on Netflix. It's late, and you know you should go to sleep. You have to be up in eight hours to go to work, and you need your rest. But before you can close the application, the next episode begins to play. Netflix has conveniently loaded that episode in the background, anticipating your desire to continue following the story. And then, almost against your will, you are watching the next episode. Oh well, you figure, I can make up sleep on the weekend.

Along with the millions of others watching Netflix at that precise instant, you have just been sucked into the bottomless well of consumption. Netflix has teams of PhD data scientists who work to figure out how to get you to watch more movies. As you watch Netflix, they watch you, tracking your behavior in minute detail. They track when you pause, rewind, or fast-forward; the days of the week when you tend to watch; the times of day when you watch; where you watch (by zip code); what device you watch on; the content you watch; how long you pause for (and whether you return); whether you rate content; how often you search content; and how you browse and scroll—to name just a few parameters. Truly, they are watching you watching them!

So it's hardly surprising that Netflix figured out that starting the next episode without even asking you would entice you to consume far more content. They noticed that some users were binge-watching and decided that automatically activating the next episode might be a good feature. Netflix launched “Post-Play,” as the feature is called, in 2012. Other video-hosting companies quickly followed suit. It got so bad that Apple built a feature into Safari that blocks auto-play videos on webpages and, in January 2018, Google made this a feature in its Chrome browser! So how much more do we consume when facing a bottomless pit of content? Real data on that aren't publicly available yet (although Netflix, YouTube, and Facebook certainly have them), but research and surveys offer clues to the soaring amount of user time their videos occupy. A 2017 report that surveyed 37,000 consumers found that Netflix binge-watching had become “the new normal,” with 37 percent of binge-watchers actually partaking in their pastime at work![20]

Since Netflix launched the feature, every other major streaming-video provider has taken advantage of the overconsumption that follows from automatic availability. Netflix, Hulu, YouTube, and HBO all have bottomless wells set up on their video applications. The lesson has not been lost on traditional online publications, either. Most media sites now offer suggested reading links at the ends of articles and in sidebars, and highlight “most popular,” “most shared,” and “most e-mailed” articles. Many of them, mirroring Facebook, Instagram, and Twitter, now have scrolling pages that cause each article to roll into the next without requiring a click. The goal is to boost consumption, at nearly any cost, even that of fostering a consumer's destructive behaviors. In effect, every digital company wants us to binge-watch everything, all the time. Our value to it has been reduced to the amount of time we spend in an application watching a video or playing a game.

This is hardly the first time that for-profit businesses have sought to induce addictive behavior. Soft-drink companies such as Coca-Cola, the tobacco companies, fast-food chains, and convenience stores such as 7-Eleven all focus on building repeatable habits for reliable long-term consumption of their products. They have done this largely without real concern for the impact on the user's or consumer's well-being. To those whose paramount concern is profit, such disregard makes perfect sense. Why would they suggest that those constantly tapping a screen to place more bets (literal or figurative) consider the impact of their actions on their families, their finances, and their health? But most of the large tech companies stake a claim not to operate in such a vacuum: they claim to be doing what they are doing in part to promote the betterment of humankind.

True, Coca-Cola, PepsiCo, and other companies peddling addictive products also have lofty mission statements. But society doesn't take their mission statements seriously, and they have little true potential to better humankind beyond underwriting charitable efforts: Coke will never announce that, because of the link between sugary drinks and diabetes, it will cease selling those drinks. In contrast, Facebook, Twitter, and other social-network tools do have a unique potential to effect positive change; witness the impact of Twitter carrying the message of the Arab Spring movement, and the use of Facebook as a means of recruiting subjects for trials of experimental drugs, a significantly cheaper technique than the traditional recruitment methods.[21]

Another key way in which the online and Internet giants differ from the others lies in their ubiquity—and therefore their power—in our lives. No one spends nine hours a day eating McDonald's or hanging out in 7-Elevens. You may carry a soda or a cup of coffee for several hours in a day, but you don't usually sleep next to it or take a swig of it in the middle of the night when you awake. You don't conveniently carry those experiences everywhere in your pocket and mount them on your dashboard. You don't totally freak out if you don't know where your soda is! The only exception we can think of is tobacco products. But even the most deeply addicted cigarette smoker can go for an hour or two without lighting up, whereas normal people who have a healthy relationship with online tools rarely go a full two hours during a working day without logging in, checking e-mail, or undertaking some form of social activity online.

Equally troubling, recent research has associated binge-watching with sleep disorders.[22] Netflix CEO Reed Hastings stated, half in jest, that the company's primary competition is sleep, perhaps not realizing the truth in his words.[23] We return to the effects of media technology on our sleep in chapter 6.

So large technology companies' decisions to default us to the bottomless pit of content show that they may not have our best interests in mind. To be fair, Facebook, Netflix, Hulu, and YouTube all allow users to turn off this auto-play feature (though apparently HBO Now does not). But wouldn't it be better for everyone if people could opt into the feature rather than encounter it and have to opt out? A simple Play Next Episode button works almost as well. And when we want to opt out of video auto-play on Facebook, arriving at the right setting takes a few not necessarily intuitive steps. This naturally discourages people from turning the feature off.

This may seem a paternalistic suggestion, but making such repetition an opt-in feature would give users a chance to make a more conscious decision before they are trained to expect auto-play. In pausing, we temporarily break a pattern, returning decision-making to our conscious minds and establishing a fresh opportunity to sidestep or counter our addictive behaviors. And the rarity with which tech and application vendors allow users to opt in rather than opt out—or even to pause—puts the lie to any claims of innocence. They know that far fewer users would consciously decide to drink repeatedly from the bottomless well; and profit maintenance takes precedence over user choice.

FOMO: The Gnawing Fear That We Are Missing Something Important

Fifty years ago, when we left the office or the job, we heard from our managers or employees only if there was a real emergency. Such communication would take the shape of a phone call. Today, notification inflation is part of every job. During an eight-hour workday, on average we check our e-mails nine times an hour.[24] We send texts to update our progress while we're in transit to the office or to let people know when we'll emerge from a meeting. Each of those notifications that we send in turn demands attention from its recipients. How many of those interruptions are necessary or even helpful? Probably fewer than 5 percent of them.

But these notifications are perceived as exceptionally valuable by the companies that make communication tools for work. For example, Slack was the fastest-growing business chat tool in 2017, worth more than $5 billion as of September 2017.[25] It looks a lot like nearly every other chat tool ever made, going back to IRC (Internet Relay Chat), but Slack uses numerous tricks to hook users and entice them to spend more time using the application.

In fact, the company is so convinced that constant notifications are a positive feature that its product designers resort to scare tactics should a user wish to turn them off. To ensure that users buy into all this notification noise, Slack presents a stark warning when someone decides not to enable desktop notifications: “Desktop notifications are currently disabled. We strongly recommend enabling them.” Slack would probably counter that its users can turn on Do Not Disturb mode inside the app whenever they wish to concentrate, but that very argument implies that interruption as a default state is optimal. We beg to differ: interruption as a default state appears to be miserable, unproductive, and bad for our health.

On top of notification inflation, then, we have built a culture of FOMO: fear of missing out. We check our e-mail first thing in the morning to see what happened while we were sleeping. This fills our brain with unnecessary conversations during its otherwise most productive and creative time, the morning. (That would be a lesser problem if the average e-mail message were more useful.) Productivity gurus such as Tim Ferriss and Cal Newport intentionally avoid answering e-mails or texts until after they have completed their most important tasks of the day. This makes perfect sense when we consider how often we check e-mail. University of California, Irvine, researcher Gloria Mark and colleagues found that workers check e-mail an average of seventy-seven times a day—and that checking e-mail constantly tends to increase worker frustration and stress.[26] If we had checked our e-mails seventy-seven times a day while writing this book, we would never have finished it!

We keep people on as Facebook friends even though we don't really want to, because we are afraid that we might miss out on something that people in our high-school class are doing, saying, or experiencing. We refrain from unfollowing people on Twitter because they might notice and take offense. Yet we keep those same people unmuted in our feed just in case they post something interesting. We use tools such as Nuzzel to save time by giving us a newsfeed of everything that our friends are reading (or at least posting on Twitter), although this also means we have more to read and are less focused in our reading.

And we spend time on Facebook Messenger or WhatsApp chatting about things that have little to do with our work, to see what we've missed out on around the virtual office watercooler. Slack is very popular in the tech world; the neighborhood version is Nextdoor. On Nextdoor, neighbors connect in useful ways to share information and to chat, but they also spend many hours in vitriolic arguments over whether dogs should be leashed in the park or whether it's okay to light a wood-fired stove in the winter. Nextdoor, too, strongly encourages accepting notifications.

In our use of every screen device, and on nearly every app and website, some kind of Do Not Disturb function exists: on our laptop or phone, there are options to control notifications; in the various applications themselves, there are notification options; and of course there is the on/off switch. But somehow we rarely use them. And many work environments have unspoken understandings that a worker must respond to any e-mail, text, or chat from a superior within a certain period or face unpleasant consequences. Being labeled “unresponsive” or “not a team player” is often code for someone who prefers to focus on his or her work rather than constantly monitor e-mail and chat messages in order to respond to superiors or colleagues.

Forcing Us to Follow Their Agenda to Reach Our Agenda

Tristan Harris discusses how technology companies set our agendas for us by mirroring and magnifying brick-and-mortar stores' strategies for influencing shoppers. For example, grocery stores put the most popular products—milk and prescriptions—at the back of the store in order to draw shoppers past as many products as possible, and they put things such as produce and deli and dairy displays along the outer walls to encourage shoppers to circle the stores.

Tech companies place similar distractions in the way of their own customers. Facebook, for instance, routes people through the newsfeed before they can see an event they are interested in. Naturally, we get distracted by our newsfeed because there is always something new there. This results in further consumption of Facebook but slower progress toward our original goal (checking out an event).

Of course, whenever we use a free service, such as most of the social networks, bending users to the company's agenda to increase consumption of advertising is part of the price of entry. We all know and understand that. But maybe we would prefer a paid version with direct access to key tasks and screens? Or maybe there's a better way to help us get directly to our intended destination. These are wishful and wistful questions. We have no illusions that such options will be forthcoming, as they would enable us to reduce our time in the application and redirect our attention for a few seconds or minutes per month, to the chagrin of shareholders and the cadres of mathematicians and computer scientists whose primary job it is to get us to click on ads. To be fair, Facebook announced in January 2018 that it would switch its algorithms to show in the newsfeed far more news from friends and family. But it remains unclear whether that also includes news articles or just personal updates. Alex, for one, has relatives with strong political views that oppose his own, and he would rather not see their postings of hyperbolic (and sometimes fake) news articles.

Tristan Harris dreams of a digital bill of rights that would mandate direct access: “Imagine a digital ‘bill of rights’ outlining design standards that forced the products used by billions of people to let them navigate directly to what they want without needing to go through intentionally placed distractions.”[27] Though Harris has long received support in his quest from the mindfulness and productivity communities, he is now receiving support from unexpected quarters: hedge funds and employee-pension funds. Jana Partners, a multibillion-dollar activist hedge fund, and the California State Teachers' Retirement System (CalSTRS), one of the largest public-employee pension funds in the U.S., sent a letter to Apple CEO Tim Cook asking the company to consider how iPhones and iPads affect the well-being of children.[28] A digital “bill of rights,” however, remains wishful thinking in the United States. There is no clear movement to establish a bill of users' rights, even though it is a wonderful idea for balancing addictive product design with user choice and control. A handful of companies are trying to do this, and we'll talk about them in the final chapters as well as on the book's website (HackedHappiness.com).

By contrast, Europe has been steadily putting in place laws that are building people's online rights, step by step. The “Right to be Forgotten” gives people the right to ask online properties to remove results or information about them. Europe also mandates that any algorithm in use be explainable to humans. This sounds quixotic, but it just may have the salutary impact of forcing companies to consider that their users may have a right to understand how decisions about them are reached. Europe's data-privacy laws—Germany's being the most stringent—tend to lead U.S. laws as well in putting the burden of maintaining user privacy on the companies that collect the data and in placing real limits on the kinds of data they can collect and under what circumstances they can collect it.

Exploiting People's Inability to Forecast Time Spent

How often have you clicked on a notification to check what caused the red bubble to pop up and learned that someone has tagged you in a picture, only to look up thirty minutes later to realize you've been aimlessly browsing through the photos of online friends? This is the digital equivalent of a classic sales technique: “Can I ask for a minute of your time?” It relies on a deep feature—some might say defect—of our basic mental functions.

In almost every activity, we humans underestimate how long it will take to complete a task and how likely we are to become distracted. Some services, such as Medium, try to help users manage expectations by posting a reading time on each article. But, by and large, technology encourages us to dive into tasks large and small with the understanding that doing so will take just a moment, though in reality any task will absorb us for longer than we estimate. This bias is compounded by all the ways in which technology companies seek to distract us as we undertake a task, making it even harder to estimate how long it will take us to finish. Imagine how cool it would be to have an I Am in a Hurry button on a smartphone, or an application that would clear the way of distractions such as ads, inducements to click on other feed items, or other tricks deployed to drive higher engagement. (In fact, ad-blocking software is a de facto I Am in a Hurry button: the main reason users run ad-blocking is that ads slow down their online experience, according to the Internet Advertising Bureau.)[29]

The “time spent” bias is even more pronounced across the entire smartphone platform and our general use of technology. We radically underestimate the amount of time we spend with our devices—perhaps even by half. Participants in a small 2015 study of twenty-three adults aged eighteen to thirty-three estimated that they spent roughly two and a half hours per day on their phones. In fact, they spent, on average, more than five hours a day, across nearly eighty-five separate uses.[30]

Exploiting the Availability Bias

You may remember as a child playing in your neighborhood without supervision, riding bikes, or going to the park, and then just walking or running home when it was dinnertime. Such freedom is a rarity for children today, because of parents' fears for their children's safety, doubtless affected by our endlessly scrolling newsfeeds. People tend to overestimate the likelihood that negative events will happen to them or their children. And the Internet is a giant machine for inflating availability bias. Our newsfeeds fill our heads with horrible news from around the globe, conveniently curated (not necessarily for accuracy) by news-aggregation engines over which we all too often have little control.[31]

People are naturally attracted to catastrophic events, and the Internet plays to this attraction by making it possible for us to read about child abuse, horrible crimes, and all manner of sick behavior or dangerous events transpiring not just locally (as was formerly the case) but anywhere in the world. In the news business, as the saying goes, “If it bleeds, it leads.” On the web, this phenomenon leads to results that are far more serious. Even as the statistical likelihood of violent crime and of child abduction has (at least until 2015) steadily fallen in the United States,[32] parents have adopted ludicrous precautions, such as driving their children half a block to school or refusing to allow their children to ride their bikes in safe neighborhoods or explore the woods near their homes.[33]

This fear is related also to the affect heuristic: our tendency to let the momentary feelings that a stimulus evokes steer our judgments.[34] Psychologists believe that this human tendency explains why messages designed to activate emotions are more persuasive than other messages.[35] In other words, when we confront an emotive article in a newsfeed with a horrible headline, the article has a larger effect on our thinking, and on our belief that such horrible events are common, than if we read a drier, more clinical, or statistics-driven article presenting the same factual content.

Just a Few of the Many

These are just a few of the ways in which technology reduces our choices, offers us false choices, and persuades us to consume more than we need to. Dozens of books—from the Dale Carnegie classic How to Win Friends and Influence People, to the catalogue of thirty-three psychological tricks of modern advertisers, Hidden Persuasion, by Marc Andrews and Matthijs van Leeuwen, to Natasha Dow Schüll's Addiction by Design, an exposé of the incredibly detailed deliberations of Las Vegas casinos—have covered the myriad methodologies, proven and unproven, that are used to influence our behavior.[36] In relation to modern technology, these efforts trace back most prominently to the writings and teachings of a respected and quietly famous Stanford University professor named B. J. Fogg.

Notes

[1]Jessica Lee, “No. 1 position in Google gets 33% of search traffic [study],” Search Engine Watch 11 February 2018, (accessed 2 February 2018).

[2]“The way the brain buys,” The Economist 18 December 2008, (accessed 2 February 2018).

[3]Tristan Harris, “How technology is hijacking your mind—from a magician and Google design ethicist,” Thrive Global 18 May 2016, journal.thriveglobal.com/how-technology-hijacks-peoples-minds-from-a-magician-and-google-s-design-ethicist-56d62ef5edf3 (accessed 2 February 2018).

[4]Charles B. Ferster and B. F. Skinner, Schedules of Reinforcement, New York: Appleton-Century-Crofts, 1957.

[5]Andrew Thompson, “Engineers of addiction,” The Verge 6 May 2015, (accessed 2 February 2018).

[6]Brad Plumer, “Slot-machine science: How casinos get you to spend more money,” Vox 1 March 2015, (accessed 2 February 2018).

[7]Candice Graydon, Mike J. Dixon, Kevin A. Harrigan, et al., “Losses disguised as wins in multiline slots: using an educational animation to reduce erroneous win overestimates,” International Gambling Studies 2017;17:442–458, (accessed 2 February 2018).

[8]Luke Clark, Andrew J. Lawrence, Frances Astley-Jones, et al., “Gambling near-misses enhance motivation to gamble and recruit win-related brain circuitry,” Neuron 2009;61(3):481–490, (accessed 2 February 2018).

[9]Mark R. Dixon and Jacob Daar, “Losses disguised as wins, the science behind casino profits,” The Conversation 3 November 2014, (accessed 2 February 2018).

[10]Robert B. Breen and Mark Zimmerman, “Rapid onset of pathological gambling in machine gamblers,” Journal of Gambling Studies 2002;18(1):31–43, (accessed 2 February 2018). Mike J. Dixon, Kevin A. Harrigan, Rajwant Sandhu, et al., “Losses disguised as wins in modern multiline video slot machines,” Addiction 2010;105(10):1819–1824, (accessed 2 February 2018). “Congratulations, you've lost! How slot machines disguise losses as wins,” Freakonomics 1 September 2011, (accessed 2 February 2018). Alice Robb, “Why are slot machines so addictive?” New Republic 5 December 2013, (accessed 2 February 2018). Brad Plumer, “Slot-machine science: How casinos get you to spend more money,” Vox 1 March 2015, (accessed 2 February 2018). Candice Graydon, Mike J. Dixon, Kevin A. Harrigan, et al., “Losses disguised as wins in multiline slots: Using an educational animation to reduce erroneous win overestimates.” K. R. Barton, Y. Yazdani, N. Ayer, et al., “The effect of losses disguised as wins and near misses in electronic gaming machines: A systematic review,” Journal of Gambling Studies 2017;33:1241–1260, (accessed 2 February 2018).

[11]Tristan Harris, “How technology is hijacking your mind—from a magician and Google design ethicist.”

[12]Keith Hampton, Lauren Sessions Goulet, Eun Ja Her, et al., Social Isolation and New Technology: How the Internet and Mobile Phones Impact Americans' Social Networks, Washington, DC: Pew Research Center, 2009, (accessed 2 February 2018).

[13]David Ginsberg and Moira Burke, “Hard questions: Is spending time on social media bad for us?” Facebook Newsroom 15 December 2017, (accessed 2 February 2018).

[14]Eli J. Finkel, Paul W. Eastwick, Benjamin R. Karney, et al., “Online dating: A critical analysis from the perspective of psychological science,” Psychological Science in the Public Interest 2012;13(1):3–66, (accessed 2 February 2018). Hui-Tzu Grace Chou and Nicholas Edge, “‘They are happier and having better lives than I am’: The impact of using Facebook on perceptions of others' lives,” Cyberpsychology, Behavior, and Social Networking 2012;15(2):117–121, (accessed 2 February 2018). Sonja Lyubomirsky and Lee Ross, “Hedonic consequences of social comparison: A contrast of happy and unhappy people,” Journal of Personality and Social Psychology 1997;73(6):1141–1157, (accessed 2 February 2018).

[15]Emily Hanna, L. Monique Ward, Rita C. Seabrook, et al., “Contributions of social comparison and self-objectification in mediating associations between Facebook use and emergent adults' Psychological Well-Being,” Cyberpsychology, Behavior, and Social Networking 2017;20(3)172–179, (accessed 2 February 2018). Helmut Appel, Alexander L. Gerlach, and Jan Crusius, “The interplay between Facebook use, social comparison, envy, and depression,” Current Opinion in Psychology 2016 June;9:44–49, (accessed 2 February 2018).

[16]Allee Manning, “Teens are crippled by social media-fueled FOMO,” Vocativ 16 June 2016, (accessed 2 February 2018).

[17]Sebastián Valenzuela, Daniel Halpern, and James E. Katz, “Social network sites, marriage well-being and divorce: Survey and state-level evidence from the United States,” Computers in Human Behavior 2014;36:94–101, (accessed 2 February 2018).

[18]Emily Hanna, L. Monique Ward, Rita C. Seabrook, et al., “Contributions of social comparison and self-objectification in mediating associations between Facebook use and emergent adults' psychological well-being,” Cyberpsychology, Behavior, and Social Networking 2017;20(3):172–179, (accessed 2 February 2018).

[19]Holly B. Shakya and Nicholas A. Christakis, “Association of Facebook use with compromised well-being: A longitudinal study,” American Journal of Epidemiology 2017;185(3):203–211, (accessed 2 February 2018).

[20]Proma Khosla, “Study reveals how often we laugh, cry, and get creeped on while watching Netflix in public,” Mashable Australia 14 November 2017, (accessed 22 March 2018).

[21]Virginia Lau, “The Michael J. Fox Foundation uses Facebook to recruit Ashkenazi Jews for Parkinson's study,” MM&M 22 September 2016, (accessed 2 February 2018).

[22]Liese Exelmans and Jan Van den Bulck, “Binge viewing, sleep, and the role of pre-sleep arousal,” Journal of Clinical Sleep Medicine 2017;13(8):1001–1008, (accessed 2 February 2018).

[23]Alex Hern, “Netflix's biggest competitor? Sleep,” The Guardian 18 April 2017, (accessed 2 February 2018).

[24]Gloria Mark, Shamsi T. Iqbal, Mary Czerwinski, et al., “Email duration, batching and self-interruption: Patterns of email use on productivity and stress,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, New York: ACM, 2016, (accessed 2 February 2018).

[25]Andrew Nusca, “Slack raises $250 million; tops $5 billion valuation,” Fortune 18 September 2017, (accessed 2 February 2018).

[26]Gloria Mark, Shamsi T. Iqbal, Mary Czerwinski, et al., “Email duration, batching and self-interruption: Patterns of email use on productivity and stress.”

[27]Tristan Harris, “How technology is hijacking your mind—From a magician and Google design ethicist.”

[28]Jenny Anderson, “A letter from two big Apple investors powerfully summarizes how smartphones mess with kids' brains,” Quartz 8 January 2018, (accessed 2 February 2018).

[29]“IAB ad blocking report: Who blocks ads, why, and how to win them back,” IAB 26 July 2016, (accessed 2 February 2018).

[30]Sally Andrews, David A. Ellis, Heather Shaw, et al., “Beyond self-report: Tools to compare estimated and real-world smartphone use,” PLoS ONE 2015;10(10):e0139004, (accessed 2 February 2018).

[31]Glen Fleishman, “‘Stranger Danger’ to children vastly overstated,” BoingBoing 24 February 2015, (accessed 2 February 2018).

[32]Hanna Rosin, “The overprotected kid,” The Atlantic April 2014, (accessed 2 February 2018). Victoria Rideout, Children, Teens, and Reading, San Francisco: Common Sense Media, 2014, (accessed 2 February 2018). News and America's Kids: How Young People Perceive and Are Impacted by the News, San Francisco: Common Sense Media, 2017, (accessed 2 February 2018).

[33]Ashley J. Thomas, P. Kyle Stanford, and Barbara W. Sarnecka, “No child left alone: Moral judgments about parents affect estimates of risk to children,” Collabra 2016;2(1):10, (accessed 2 February 2018). Ashley J. Thomas, P. Kyle Stanford, and Barbara W. Sarnecka, “Correction: No child left alone: Moral judgments about parents affect estimates of risk to children,” Collabra 2016;2(1):12, (accessed 2 February 2018).

[34]Melissa L. Finucane, Ali Alhakami, Paul Slovic, et al., “The affect heuristic in judgements of risks and benefits,” Journal of Behavioral Decision Making 2000;13:1–17, (accessed 2 February 2018).

[35]Carmen Keller, Michael Siegrist, and Heinz Gutscher, “The role of the affect and availability heuristics in risk communication,” Risk Analysis 2006;26(3):631–639, (accessed 2 February 2018).

[36]Marc Andrews, Matthijs van Leeuwen, and Rick van Baaren, Hidden Persuasion: 33 Psychological Influence Techniques in Advertising, Amsterdam: BIS, 2013. Natasha Dow Schüll, Addiction by Design: Machine Gambling in Las Vegas, Princeton: Princeton University Press, 2014.