Facebook Making Its Own AI Chips to Stop Violent Livestreaming

There are a lot of questions raised by the news that Facebook is making its own artificial intelligence (AI) chips to stop violent livestreams from being broadcast across its servers. Perhaps the biggest is whether the technology will be sophisticated enough to distinguish between “good” violent content and “bad” violent content. Then there is the issue of privacy, something Mark Zuckerberg is still grappling with. Finally, there is the question of how much human intervention is needed to adequately perform the task.

Currently, AI technology already plays some role in identifying “bad” violent livestreams: murders, suicides, and other acts that result in severe harm to a human being or animal. Facebook is moving toward a more AI-dependent filtering process because it is simply much faster, but the question of how to identify a “bad” act of violence is not so easily solved. Facebook is a deeply international company, and the types of violence that might be broadcast can vary greatly from country to country, wherever people have the permission and the technology to create livestreams.

For example, the livestreamed video of the police shooting of Philando Castile was initially judged too violent and removed from the eyes of Facebook viewers. Upon further review, Facebook reversed itself, citing concepts such as the “public good” and “the need to know.” Though the livestream was clearly violent, the company decided a graphic-violence warning was sufficient to let the video play on. How the new and improved AI chip would be able to tell the difference is a vexing problem for Facebook, and one that continues to be debated.

Privacy will always be an issue for Facebook, and not just from the fake-ad angle. If a “good” violent event is broadcast, how can the AI technology determine whether anyone’s privacy has been violated during the broadcast? Public venues are almost never a problem, but what about private or corporate buildings or property? GPS coordinates will certainly help identify the location, but they are not a 100 percent guarantee. Facebook’s current estimated average time for identifying a violent livestream is 10 minutes, and the goal of the new technology is to significantly reduce that time. But 10 minutes is a long time in the digital world, so the possibility remains of thousands of people watching a violent event, and of someone’s privacy being violated in the process.

The issue of human intervention continues to rear its head amid this AI idea. One advantage of letting technology handle the evaluation is objectivity: an algorithm is not subject to a human’s personal or political bent. We already know that Zuckerberg testified before the U.S. Congress that he is aware of bias among Facebook employees, while maintaining their right to hold their own views as long as those views do not interfere with the performance of their jobs. Yet it is clear that bias can go unmonitored and uncontrolled until it is identified later. When the issue is livestreamed violence, that identification can come far too late.

So far we have looked at preventing the video stream from reaching the eyes of Facebook users as much as humanly and technologically possible. But the technology also has the potential to change the behavior of the people doing the livestreaming. Many who broadcast a murder or their own suicide do so because they expect other people to see it, a bizarre way of reaching an audience. Simply knowing the screening technology exists is likely to discourage mentally ill people from using Facebook to broadcast their personal acts of violence in the future. And because the AI chip will rely on a complex algorithm that few people understand, it is unlikely that someone will discover a way around the technological screening process.

What Facebook’s effort tells us is that technology can provide a wide range of social benefits, but it still needs human involvement to make the innately human decisions that separate man from machine. Facebook is not asking for our vote before moving forward with the new technology. What we will see is the result, which is hopefully a better user experience, something Facebook has always promised to keep pursuing for everyone.

