Facebook Making Its Own AI Chips to Stop Violent Livestreaming

There are a lot of questions raised by the news that Facebook is making its own artificial intelligence (AI) chips to stop violent livestreams from being broadcast across its servers. Perhaps the biggest is whether the technology will be sophisticated enough to distinguish between “good” violent content and “bad” violent content. Then there is the issue of privacy, something Mark Zuckerberg is still grappling with. Finally, there is the question of how much human intervention is needed to adequately perform the task.

Currently, AI technology is already used to some degree to identify “bad” violent livestreams: murders, suicides, and other acts that result in severe harm to a person or animal. Facebook wants to make the filtering process more AI-dependent simply because machines are faster, but the question of how to identify a “bad” act of violence is not so easily solved. Facebook is a thoroughly international company, and the kinds of violence that can be broadcast vary greatly from country to country, depending on who has both the permission and the technology to create livestreams.
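To make the idea concrete, here is a minimal sketch, in Python, of how an automated screening step might hand ambiguous streams off to human reviewers. Nothing here reflects Facebook’s actual system; the scores, thresholds, and function names are all hypothetical, and the “good” versus “bad” judgment would still land in the human-review step.

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class FrameScore:
    """A hypothetical per-frame result from a violence classifier."""
    timestamp_s: float      # position in the stream, in seconds
    violence_score: float   # 0.0 (benign) .. 1.0 (clearly violent)


def screen_stream(frames: Iterable[FrameScore],
                  auto_block_threshold: float = 0.95,
                  review_threshold: float = 0.60) -> str:
    """Return an action for the stream based on per-frame scores.

    - Scores at or above auto_block_threshold trigger an immediate block.
    - Scores at or above review_threshold queue the stream for human review,
      which is where the "good" vs. "bad" violence judgment would be made.
    - Everything else keeps streaming.
    """
    for frame in frames:
        if frame.violence_score >= auto_block_threshold:
            return "block_and_alert"
        if frame.violence_score >= review_threshold:
            return "queue_for_human_review"
    return "allow"


if __name__ == "__main__":
    # A stream whose third frame looks violent but not conclusively so,
    # so the pipeline defers to a human reviewer rather than blocking outright.
    sample = [FrameScore(0.0, 0.05), FrameScore(1.0, 0.10), FrameScore(2.0, 0.72)]
    print(screen_stream(sample))  # -> "queue_for_human_review"
```

The sketch also illustrates why the latency question matters: however the thresholds are tuned, anything routed to human review sits in a queue, and the stream keeps playing while it waits.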

For example, the livestream of the police shooting of Philando Castile was initially judged too violent and removed from the view of Facebook users. Upon further review, Facebook reversed itself, citing concepts such as the “public good” and “the need to know.” Though the livestream was clearly violent, the company decided a graphic-violence warning was sufficient to let the video play on. How a new and improved AI chip would be able to tell the difference is one vexing problem for Facebook, and it continues to be debated.

Privacy will always be an issue for Facebook, and not just from the fake-ad angle. If a “good” violent event is broadcast, how can the AI technology determine whether anyone’s privacy has been violated during the broadcast? Public venues are almost never a problem, but what about private or corporate buildings and property? GPS coordinates will help identify the location, but they are not foolproof. Facebook’s current estimated average time for identifying a violent livestream is 10 minutes, and the goal of the new technology is to reduce that time significantly. But 10 minutes is a long time in the digital world, so the possibility remains of thousands of people watching a violent event and someone’s privacy being violated.

The issue of human intervention keeps rearing its head in this AI discussion. One advantage of letting technology handle the evaluation is that it is objective and not subject to a human’s personal or political bent. Zuckerberg testified before the United States Congress that he is aware of bias among Facebook employees, while maintaining that they have the right to hold their own views as long as those views do not interfere with the performance of their jobs. Yet bias can clearly go unmonitored and uncontrolled until it is identified later. When the issue is livestreamed violence, that identification can come far too late.

So far we have looked at keeping the video stream from reaching the eyes of Facebook users as much as is humanly and technologically possible. But the technology also has the potential to change the behavior of the people doing the livestreaming. Many people who broadcast a murder or their own suicide do so because they know there is a high chance other people will see it; it is a bizarre way of reaching an audience. Simply knowing the technology exists may well discourage mentally ill people from using Facebook to broadcast their acts of violence in the future. And because the AI chip will rely on a complex algorithm that few people know, it is unlikely that anyone will be able to discover a way around the technological screening process.

What Facebook’s effort tells us is that technology can provide a wide range of social benefits, but it still needs human intervention to make the innately human decisions that separate man from machine. Facebook is not asking for our vote before it moves forward with the new technology. What we will see is the result, which is hopefully a better user experience, something Facebook has always promised to keep seeking for everyone.

