Network-assessment

The recent attacks against mosques in Christchurch, New Zealand captured the attention of the world. Copies of the video footage of the event spread like wildfire on social media. The attacker live-streamed the shooting, which killed at least 50 people at two mosques. According to New Zealand police, several armed individuals carried out the shootings, and four men and women were detained; police continue to investigate whether there were other accomplices. Schools in Christchurch were temporarily closed, and police urged residents to refrain from going out unnecessarily.

Facebook, represented by its VP and Deputy General Counsel Chris Sonderby, expressed the company's sadness over the incident and described its efforts to contain the spread of the video posted by the attacker. “We have been working directly with the New Zealand Police to respond to the attack and support their investigation. We removed the attacker’s video within minutes of their outreach to us, and in the aftermath, we have been providing an on-the-ground resource for law enforcement authorities. We will continue to support them in every way we can. In light of the active investigation, police have asked us not to share certain details,” explained Sonderby in the company’s official blog post.

The company provided some statistics about the video the attacker broadcast on its platform. The social media giant clarified that only 200 people viewed the incident live, and none of them reported the live stream. The video received more than 4,000 additional views before Facebook admins were able to take it down. The first user report arrived only 12 minutes after the broadcast ended, but by then a copy of the video had already been uploaded to other platforms, so Facebook could not contain it to its own service before the takedown.

The company used its AI technology to scan the entire Facebook platform for copies of the same video, effectively removing all shares of the original video across Facebook’s global community. Sonderby also stressed that broadcasting such a video is in itself a violation of the Community Standards. Facebook is also seeking out and banning newly created accounts that impersonate the attacker (posers), and its detection algorithms flag deliberate alterations of the video’s audio intended to bypass detection.
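The article does not say how Facebook's matching works internally; a common technique for catching re-uploads of a known video is perceptual hashing, where each frame is reduced to a compact fingerprint and near-identical fingerprints are treated as the same content. Below is a minimal, hypothetical sketch of that idea using a difference hash (dHash); the frame data, hash size, and distance threshold are illustrative assumptions, not Facebook's actual implementation.

```python
def dhash(frame):
    """Difference hash: one bit per adjacent-pixel comparison.

    `frame` is a 2D list of grayscale values (rows x cols); each row
    contributes (cols - 1) bits recording whether brightness increases
    from left to right.
    """
    bits = 0
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if right > left else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_copy(hash_a, hash_b, threshold=2):
    """Treat two frames as matching if their hashes are close.

    A small Hamming distance tolerates re-encoding noise; the threshold
    here is an assumed tuning parameter.
    """
    return hamming(hash_a, hash_b) <= threshold

# A tiny 3x4 "frame" and a slightly re-encoded copy (one pixel nudged);
# the brightness gradients survive, so the hashes stay close.
original = [[10, 40, 40, 90], [5, 5, 80, 80], [60, 20, 20, 70]]
reencoded = [[10, 41, 40, 90], [5, 5, 80, 80], [60, 20, 20, 70]]

print(is_copy(dhash(original), dhash(reencoded)))  # → True
```

Because the hash encodes relative brightness rather than exact pixel values, it survives compression artifacts and mild filtering, which is why attackers resort to heavier alterations (such as the audio changes mentioned above) to evade this kind of matching.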

“In the first 24 hours, we removed about 1.5 million videos of the attack globally. More than 1.2 million of those videos were blocked at upload, and were therefore prevented from being seen on our services. We identified abusive content on other social media sites in order to assess whether or how that content might migrate to one of our platforms,” added Sonderby.

The Facebook official stressed the importance of the company’s close coordination with the Global Internet Forum to Counter Terrorism (GIFCT). Facebook sent GIFCT more than 800 copies of related videos collected on the platform, in the hope of shedding light on the case. The company expressed its full cooperation with New Zealand authorities regarding information sharing about the identity of the suspects. The Facebook VP declared that under the Community Standards, the social media giant cannot be used as a platform for spreading terrorist content, and anyone caught doing so will face the laws of the country where the footage was taken.

Related Resources:

Facebook In Hot Water Again In 2019?

Is Fake News On Facebook Actually Killing People In Nigeria?

Stolen Facebook Accounts Now For Sale On The Dark Web

Why Facebook Is Now Seeking A Friend In The Security Business
