World

NZ PM receives death threats on social media

New Zealand's Prime Minister Jacinda Ardern leaves after the Friday prayers at Hagley Park outside Al-Noor mosque in Christchurch, New Zealand March 22, 2019. REUTERS/Jorge Silva

New Zealand Prime Minister Jacinda Ardern has received death threats on social media, local media reports.

Police are investigating death threats sent to Jacinda Ardern on social media, according to a report published by the NZ Herald.

A Twitter post containing a photo of a gun and captioned "You are next" was sent to the Prime Minister.

The Herald understands the post had been up for more than 48 hours before the sender's Twitter account was suspended shortly before 4pm, after it was reported by several people.

Another post tagged Ardern and NZ Police and carried the same photo with the caption "next it's you".

The suspended account contained anti-Islamic content and white supremacist hate speech, the report says.

A police spokeswoman told the Herald: "Police are aware of a comment made on Twitter and are making enquiries."

The Prime Minister's office has been contacted for comment.

A Twitter spokesperson said Twitter's rules prohibit violent threats.

"We took action shortly after we received the first report on this Tweet, and our teams continue to work proactively to remove violative and illegal content from the service in relation to the Christchurch attack.

"We also continue to cooperate with law enforcement to facilitate their investigations as required.

"We strongly encourage people on Twitter to report violative and illegal content so we can take action. Retweeting or sharing screenshots of this type of content only serves to spread it further and gives it more visibility to more people."

The posts were pointed out to Twitter itself after the social networking giant tweeted a message of support following the two Christchurch mosque attacks last Friday, which left 50 people dead.

Twitter Public Policy's tweet - promoted by Twitter Australia - read: "Kia Kaha. We stand together with New Zealand."

The message came at 1.48pm, shortly after the nationwide two-minute silence for the mosque victims.

Some said the tweet was an "empty gesture" because racist and violent tweets were being left on the platform without action from Twitter.

Social media platforms have been criticised after a livestream video of Friday's attack could be uploaded and shared.

The day after the attack, Twitter said it was "monitoring and removing any content that depicts the tragedy, and will continue to do so in line with the Twitter Rules".

It also recommended that people report Tweets that break any of the social media company's rules.

Facebook said in a statement that the shooter's livestream was viewed fewer than 200 times and was not reported until 12 minutes after the 17-minute broadcast ended. The company said the video was viewed about 4000 times in total before being removed.

The company also said that its artificial intelligence was part of the problem.

"While [AI's] effectiveness continues to improve, it is never going to be perfect," Facebook said in a statement on Thursday night.

"People will continue to be part of the equation, whether it's the people on our team who review content, or people who use our services and report content to us."


Source: NZ Herald
