Facebook Explains What Went Wrong with New Zealand Shooting Live Stream
A.I. tools failed to immediately detect and remove video of the attacks.
Facebook says its artificial intelligence tools failed to quickly remove video from the alleged New Zealand mosque shooter's live stream of the attacks, which left 50 people dead.
The company says it uses artificial intelligence every day to remove nudity, terrorist propaganda, and graphic violence from the platform.
"However, this particular video did not trigger our automatic detection systems," Facebook Vice President of Integrity Guy Rosen wrote in a blog post, because the A.I. system did not have a sufficient number of examples in its training data . "To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare."
According to Rosen, the shooter's original live stream was viewed fewer than 200 times during the broadcast, and no Facebook user reported the video. By the time Facebook became aware of it, the link had already spread across the Internet. "In the first 24 hours, we removed more than 1.2 million videos of the attack at upload...Approximately 300,000 additional copies were removed after they were posted."
Some New Zealand businesses have threatened to boycott Facebook over the way the company handled videos of the attack.
According to CBS News, some "critics are pointing to what they say is a lack of investment and controls from Facebook to safeguard against issues like misinformation and violent content."