Does Artificial Intelligence Have Ethics?

In 2016, Microsoft released a chatbot designed to interact with people over Twitter. Enabled with an AI routine that analyzed speech, the bot was supposed to show how machines could mimic human communication. Unfortunately, Microsoft had to remove the bot when it began tweeting racist and sexist comments; its AI engine had been flooded with hate speech from pranksters and other bad actors online. The AI routine itself was not, of course, sexist or racist; it was merely imitating speech based on the data it received. I’m sure this incident led to plenty of jokes about AI-enabled machines becoming evil geniuses bent on subjugating humanity. But what it really proves is that the real threat in our current generation of AI isn’t the AI itself; it’s us.

I’ve previously written about how the fear that AI-enabled machines will make human work irrelevant is unfounded. In short, while humans aren’t as fast or accurate as computers when it comes to analyzing data, computers are no match for us when it comes to the informed application of that analysis, creativity, lateral thinking and emotional intelligence. We possess critical thinking and experience that today’s computers simply can’t replicate. Likewise, computers don’t have the capacity for ethical thinking that humans do. Despite its sophistication, AI at the end of the day is just another tool. And like any tool, it has no ethics of its own, but it can be put to unethical ends when wielded by an unethical, unscrupulous or simply ignorant person. More often than not, however, AI behaves unethically because it hasn’t been trained properly.

Take AI-powered vision systems, for example. Recent research revealed that a popular facial recognition software platform had “much higher error rates in classifying the gender of darker-skinned women than for lighter-skinned men.” Is the AI racist? No, but it does replicate the institutional biases inherent in society. In this case, the training data used to teach the algorithm to identify faces consisted mostly of white male faces. Accordingly, the algorithm performed better on the faces with which it had more experience. Now imagine law enforcement using this AI-enabled vision technology to scan crowds in busy public spaces for the face of a wanted criminal. If that criminal were a person of color, the chances of the AI incorrectly identifying an innocent person of color as the criminal would be higher than they would be for a white suspect.
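To make that concrete, here is a minimal sketch of the kind of per-group error audit that surfaces this sort of disparity. The groups, labels and results below are invented for illustration; real audits of this kind rely on carefully curated benchmark datasets.

```python
# A minimal sketch of a per-group error audit. The group labels,
# predictions and numbers here are hypothetical, purely to illustrate
# that error rates should be measured separately for each subgroup.

from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation results for a gender classifier.
results = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # misclassification
    ("darker-skinned female", "female", "female"),
]

for group, rate in error_rate_by_group(results).items():
    print(f"{group}: {rate:.0%} error rate")
```

The point is simply that an overall accuracy number can look fine while one subgroup quietly bears most of the errors; you only see it if you measure it.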

The science fiction author Isaac Asimov famously developed the Three Laws of Robotics. The laws set forth a simple ethical framework for how robots should interact with humans, and the laws are unalterably fixed in the robot’s positronic brain (aka AI); they cannot be bypassed. In their entirety, the laws are:

• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

With those rules in place, humans could rest assured that their interactions with robots would be safe. Even if a robot had been ordered by one human to harm another, its “digital conscience” would stop it from carrying out the order.

I believe a similar code of ethics should be applied when developing AI. Data scientists and IT professionals will need to review the results of their AI algorithms’ analysis not only for accuracy, but also for the fair and ethical application of that analysis.
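What might that look like in practice? One simple safeguard is a confidence gate: the algorithm is not allowed to act on its own when it isn’t sure, and the decision is routed to a human reviewer instead. The sketch below is purely illustrative, with a hypothetical model, label and threshold.

```python
# A minimal sketch of one kind of safeguard: a gate that refuses to act
# automatically on low-confidence predictions and defers to a human.
# The Prediction type, labels and threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def decide(prediction: Prediction, threshold: float = 0.95) -> str:
    """Act automatically only when confidence clears the threshold."""
    if prediction.confidence >= threshold:
        return f"auto: act on '{prediction.label}'"
    return "defer: route to human review"

print(decide(Prediction("possible match", 0.99)))  # confident enough to act
print(decide(Prediction("possible match", 0.62)))  # a person makes the call
```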

While this will take much work and collaboration between businesses, academia and end users, it is critical that we adopt ethical fail-safes when applying AI technology in our daily lives. AI is developing rapidly, and the risk that it could enable unethical behavior by its users, even unintentionally, is real. These fail-safes need to be defined now so they can be used to manage the performance of AI applications in the future.

