Take YouTube’s Dangers Seriously

This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.

My colleague Kevin Roose excels at explaining how our behavior is shaped by the companies behind our favorite online hangouts.

In the first episode of Kevin’s new audio series, called “Rabbit Hole,” he tells us how Caleb Cain, a college dropout in West Virginia, found himself watching ever more extreme YouTube videos. Caleb said he started to believe the racism, misogyny and conspiracy theories he absorbed.

People believe in fringe ideas for complex reasons. But Kevin points some blame at YouTube and its feature that recommends one video after another. This can push people from relatively mainstream videos toward dangerous ideas.

Our conversation about this, and more:

Aren’t most of us on YouTube for cooking videos and kittens, not conspiracies?

Kevin: People watch more than a billion hours of YouTube videos daily. While we can’t know how much of that is disturbing or dangerous, it’s inevitably a huge amount. And for a long time, people like Alex Jones and propaganda networks like RT had millions of subscribers and hundreds of millions of views.

How much blame does YouTube deserve for people like Caleb developing extreme views?

It’s a hard question. When someone gets drawn into an extremist rabbit hole on YouTube, it’s often because of loneliness, economic conditions and the “alternative influence network” of people who spread these ideas by, essentially, being good at YouTube.

But YouTube bears responsibility. Part of what makes YouTube seductive — and successful as a business! — is its automated recommendations and the autoplay function that starts the next video after you finish one. That software plays a huge role in what people watch.

If someone goes to a library, checks out “Mein Kampf” and becomes a neo-Nazi, that’s not the library’s fault. But if there’s a robot librarian who greets them at the front door, steers them to the German history section and puts “Mein Kampf” in front of them …

Oof. Do you think it would help if YouTube turned off video recommendations?

I do.

What do we collectively do?

We need to decrease the influence these platforms have over us. For me, removing automated features — turning off autoplay on YouTube, making my own Spotify playlists, making it so Alexa doesn’t automatically choose the dog food brand I buy — helps me feel more in control.

And we journalists at big news organizations can help by figuring out how to make true, factual information as appealing to people on YouTube as conspiracy theories.

When people who are radicalized online commit crimes, or Alexa leads us to buy a certain pet food, are these our choices? Or is the internet warping us?

Both! The French researcher Camille Roth writes that the algorithms powering websites like YouTube and Facebook come in two flavors: “read our minds” and “change our minds.” If we’re aware of the machines working on us, and feel the ways they’re steering our choices, we can decide whether we want to follow a recommendation or make a different decision.

Get this newsletter in your inbox every weekday; please sign up here.


Brian X. Chen, our personal tech columnist, offers this guidance on digital scams:

Even in a pandemic, scammers are still trying to get your money. Fraudsters are posing as the World Health Organization.

The scams mostly involve messages sent by email or WhatsApp that request personal information or donations, according to a warning recently posted online by the W.H.O. Some messages try to trick people into clicking on malicious links or downloading files, which can expose passwords or compromise devices.

What to do? Don’t click on links or open files sent to you from unknown sources. If you spot one of these scams, the W.H.O. suggests reporting it on its website.
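To make that advice concrete, here is a minimal sketch in Python of how one might check that a link in a suspicious message actually points to the W.H.O.’s real domain, who.int, rather than a look-alike address. The helper name (points_to_who) and the example addresses are my own illustration, not something the W.H.O. or The Times provides.

    # Illustrative only: accept a link just when its host is who.int or a subdomain of it.
    from urllib.parse import urlparse

    def points_to_who(url: str) -> bool:
        """Return True only if the URL's host is who.int or a subdomain of it."""
        host = urlparse(url).hostname or ""
        return host == "who.int" or host.endswith(".who.int")

    # A genuine W.H.O. link passes; a look-alike address like a scammer might use does not.
    print(points_to_who("https://www.who.int/emergencies/diseases/novel-coronavirus-2019"))  # True
    print(points_to_who("https://who-donations.example.com/give"))  # False

The same caution applies by hand: hover over a link and read the actual domain before clicking.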

For reliable information about the coronavirus, visit the W.H.O. website and continue reading coverage from The New York Times and other trusted news outlets.


  • “A new way of life conducted amid an unseen alien intelligence.” This story in The Atlantic is a terrific explanation of how Facebook’s data-collection and advertising systems work. And as Kevin noted about YouTube, Facebook’s automated systems are shaping people’s behaviors in ways even the company can’t predict.

  • Dividing people, with ulterior motives: The protests against state shelter-in-place orders are being coordinated by a handful of provocateurs on Facebook, The Washington Post reported. Charlie Warzel, the Times Opinion writer, said the coronavirus is a perfect subject for online opportunists who “instill a deep distrust in all authority, while promoting a seductive, conspiratorial alternate reality.”

  • Signs of trouble long before Zoombombing: Zoom, the suddenly popular video-calling app, says it was caught off guard by trolls breaking into people’s meetings and by newly identified security flaws. But years ago, some businesses that used Zoom flagged these risks and tried to get the company to fix them, my colleagues Natasha Singer and Nicole Perlroth report.

“I hate this house!!” Oh yes, we are all this cranky child.


We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at ontech@nytimes.com.


