
Why Facebook didn’t block live New Zealand shooting video

This combination of images shows logos for companies from left, Twitter, YouTube and Facebook. These Internet companies and others say they’re working to remove video footage filmed by a gunman in the New Zealand mosque shooting that was widely available on social media hours after the horrific attack. (AP Photos/File)

Why did Facebook air live video of the New Zealand mosque shooting for 17 minutes? Didn’t anyone alert the company while it was happening?

Facebook says no. According to its deputy general counsel, Chris Sonderby, none of the 200 or so people who watched the live video flagged it to moderators. In a Tuesday blog post, Sonderby said the first user report didn’t come until 12 minutes after the broadcast ended.

All of which raises additional questions — among them, why so many people watched without saying anything, whether Facebook relies too much on outsiders and machines to report trouble, and whether users and law enforcement officials even know how to reach Facebook with concerns about what they’re seeing on the service.

“When we see things through our phones, we imagine that they are like a television show,” said Siva Vaidhyanathan, a professor of media studies at the University of Virginia. “They are at a distance, and we have no power.”

Facebook said it removed the video “within minutes” of being notified by New Zealand police. But since then, Facebook and other social media companies have had to contend with copies posted by others.

The shooting suspect carefully modeled his attack for an internet age, as he live-streamed the killing of 50 people at two mosques in Christchurch, New Zealand.

Tim Cigelske, who teaches about social media at Marquette University in Milwaukee, said that while viewers have the same moral obligations to help as a bystander does in the physical world, people don’t necessarily know what to do.

“It’s like calling 911 in an emergency,” he said. “We had to train people and make it easy for them. You have to train people in a new way if you see an emergency happening not in person but online.”

To report live video, a user must know to click on a small set of three gray dots on the right side of the post. A user who clicks on “report live video” gets a choice of objectionable content types to select from, including violence, bullying and harassment. Users are also told to contact law enforcement if someone is in immediate danger.

Facebook also doesn’t appear to post any public information instructing law enforcement how to report dangerous or criminal video. The company does have a page titled “information for law enforcement authorities,” but it merely outlines procedures for making legal requests for user account records. Facebook didn’t immediately respond to a request for comment and questions about its communications with police.

Facebook uses artificial intelligence to detect objectionable material, while relying on the public to flag content that violates its standards. Those reports are then sent to human reviewers, the company said in a November video.

The video also outlined how Facebook uses “computer vision” to detect 97 percent of graphic violence before anyone reports it. It is less clear, however, how those systems apply to live streams.
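
Neither Sonderby’s post nor the company video spells out the mechanics, but the pipeline they describe, machine scoring plus user reports feeding a human review queue, can be illustrated with a minimal sketch in Python. The thresholds, field names and routing rules below are hypothetical, not Facebook’s actual system; the point is that a stream the model scores low and that no viewer reports never reaches a reviewer.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    classifier_score: float          # model's estimate that the content is violating (0.0 to 1.0)
    user_reports: List[str] = field(default_factory=list)

def route(post: Post, auto_threshold: float = 0.97, review_threshold: float = 0.5) -> str:
    """Route a post through a simplified, hypothetical moderation pipeline."""
    if post.classifier_score >= auto_threshold:
        return "auto-remove"           # the model is confident enough to act on its own
    if post.user_reports or post.classifier_score >= review_threshold:
        return "human-review-queue"    # uncertain or reported cases go to people
    return "leave-up"

# A live stream nobody reports and the model scores low is never reviewed.
print(route(Post("live-1", classifier_score=0.2)))                      # leave-up
print(route(Post("vid-2", classifier_score=0.2, user_reports=["u1"])))  # human-review-queue
```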

Experts say live video poses unique challenges, and complaints about live streaming suicides, murders and beatings regularly come up. Nonetheless, they say Facebook cannot deflect responsibility.

“If they cannot handle the responsibility, then it’s their fault for continuing to provide that service,” said Mary Anne Franks, a law professor at the University of Miami.

She calls it “incredibly offensive and inappropriate” to pin responsibility on users subjected to traumatic video.

In some cases, it’s not clear at the outset whether a video or other post violates Facebook’s standards, especially on a service with a range of languages and cultural norms. Indecision didn’t appear to be the problem here, though: Facebook simply didn’t know about the video in time.

Facebook’s Sonderby said in Tuesday’s blog post that the company “designated both shootings as terror attacks, meaning that any praise, support and representation of the events” are violations.

Vaidhyanathan said Facebook’s live video feature has turned into a beast that Facebook can do little about “short of flipping the switch.” Though Facebook has hired more moderators to supplement its machine detection and user reports, “you cannot hire enough people” to police a service with 2.3 billion users.

“People will always find new ways to express hatred and instigate violence,” he said.

New Zealand Prime Minister Jacinda Ardern expressed frustration that the footage remained online four days after the massacre.

Machines can detect when users try to repost banned videos by matching patterns, or digital fingerprints, in the files. But users determined to get around these checks can make small alterations, such as tweaking the color or the video speed.
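
The “digital fingerprints” the companies rely on are typically perceptual hashes: compact signatures computed from a video’s frames so that near-identical copies yield near-identical signatures. The Python sketch below uses a toy average hash over an 8x8 grayscale grid, a stand-in rather than any platform’s real fingerprinting, to show why an exact re-upload matches trivially and why a small color tweak can push a copy past a strict matching threshold.

```python
def average_hash(frame):
    """Toy perceptual hash: one bit per cell of an 8x8 grayscale grid,
    set when that cell is brighter than the frame's mean."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count of differing bits; a small distance means 'probably the same video'."""
    return sum(a != b for a, b in zip(h1, h2))

# A toy 8x8 "frame" of grayscale values standing in for a video still.
frame = [[17 * x + 9 * y for x in range(8)] for y in range(8)]

# An exact re-upload produces the identical fingerprint.
print(hamming(average_hash(frame), average_hash(frame)))      # 0

# Tinting half the picture, the kind of small alteration the article mentions,
# flips many bits, so a strict threshold no longer recognizes the copy.
tinted = [[p + 60 if x < 4 else p for x, p in enumerate(row)] for row in frame]
print(hamming(average_hash(frame), average_hash(tinted)))     # 21 of 64 bits differ
```

Production systems use far more robust hashes and tolerate some distance between signatures, but the cat-and-mouse dynamic is the same: every tolerance the matcher allows is room an uploader can exploit.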

The Global Internet Forum to Counter Terrorism, a group of global internet companies led by Facebook, YouTube, Microsoft and Twitter, said it has identified 800 different versions of the video and added them to a shared database used to block violent terrorist images and videos.

Sonderby said some variants are tough to detect and that Facebook has “expanded to additional detection systems including the use of audio technology.”

In a series of tweets a day after the shootings, Facebook’s former chief security officer, Alex Stamos, laid out the challenge for tech companies as they raced to keep up with new versions of the video.

“What you are seeing on the major platforms is the water leaking around thousands of fingers poked in a dam,” he said.

Stamos estimated the big tech companies are blocking more than 99 percent of the videos from being uploaded, “which is not enough to make it impossible to find.”
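
A back-of-the-envelope calculation shows why “more than 99 percent” still is not enough. The upload figure below is hypothetical, chosen only to make the arithmetic concrete; the article gives no totals.

```python
# Hypothetical figures, for illustration only; the article does not give totals.
upload_attempts = 1_000_000    # assumed number of re-upload attempts across platforms
block_rate = 0.99              # "more than 99 percent" caught at or after upload

slipped_through = int(upload_attempts * (1 - block_rate))
print(f"{slipped_through:,} copies still reach the platforms")   # 10,000 copies
```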
