Does Artificial Intelligence Have Ethics?

In 2016, Microsoft released a chatbot designed to interact with people over Twitter. Powered by an AI routine that analyzed speech, the bot was supposed to show how machines could mimic human communication. Unfortunately, Microsoft had to pull the bot when it began tweeting racist and sexist comments; its AI engine had been flooded with hate speech from pranksters and other bad actors online. The AI routine itself was certainly not sexist or racist; it was merely imitating speech based on the data it received. I’m sure this incident led to a lot of jokes about AI-enabled machines becoming evil geniuses bent on subjugating humanity. But I think what it really proves is that the real threat in our current generation of AI isn’t the AI itself; it’s us.

I’ve previously written about how the fear that AI-enabled machines will make human work irrelevant is unfounded. In short, while humans aren’t as fast or accurate as computers when it comes to analyzing data, computers are no match for us when it comes to the informed application of that analysis, creativity, lateral thinking and emotional intelligence. We possess critical thinking and experience that today’s computers simply can’t replicate. Likewise, computers don’t have the capacity for ethical thinking that humans do. Despite its sophistication, AI at the end of the day is just another tool. And like any tool, it has no ethics of its own, yet it can be used to do unethical things when wielded by an unethical, unscrupulous or simply ignorant person. More often than not, however, AI behaves unethically because it hasn’t been trained properly.

Take AI-powered vision systems, for example. Recent research revealed that a popular facial recognition software platform had “much higher error rates in classifying the gender of darker-skinned women than for lighter-skinned men.” Is the AI racist? No, but it does replicate the institutional biases inherent in society. In this case, the training data used to teach the algorithm to identify faces consisted mostly of white male faces. Accordingly, the algorithm performed better on the kinds of faces it had more experience with. Now imagine law enforcement using this AI-enabled vision technology to scan crowds in busy public spaces for the face of a wanted criminal. If that criminal were a person of color, the chances of the AI incorrectly identifying an innocent person of color as the criminal would be higher than they would be if the criminal were white.
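
To make the effect concrete, here is a minimal sketch using entirely synthetic data and a generic scikit-learn classifier; it is not the facial recognition system or the study data, just an illustration of the mechanism. When one group dominates the training set and the two groups follow different patterns, the model learns the majority pattern and its error rate on the underrepresented group climbs.

```python
# Illustrative sketch only: synthetic data, not real faces or the actual study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, rule):
    """Generate n samples with 2 features whose label follows the given rule."""
    X = rng.normal(size=(n, 2))
    y = rule(X).astype(int)
    return X, y

# Majority group's label depends on feature 0; minority group's on feature 1.
X_maj, y_maj = make_group(9000, lambda X: X[:, 0] > 0)
X_min, y_min = make_group(1000, lambda X: X[:, 1] > 0)

# Train a single model on the pooled, imbalanced data.
model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluate the error rate separately for each group on fresh samples.
groups = [("majority", lambda X: X[:, 0] > 0), ("minority", lambda X: X[:, 1] > 0)]
for name, rule in groups:
    X_test, y_test = make_group(2000, rule)
    err = np.mean(model.predict(X_test) != y_test)
    print(f"{name} group error rate: {err:.2%}")
```

Running this, the majority group’s error rate comes out far lower than the minority group’s, which sits near chance. The exact numbers aren’t the point; the point is that a single aggregate accuracy score would hide the gap entirely.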

The science fiction author Isaac Asimov famously developed the Three Laws of Robotics. The laws set forth a simple ethical framework for how robots should interact with humans, and the laws are unalterably fixed in the robot’s positronic brain (aka AI); they cannot be bypassed. In their entirety, the laws are:

• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

With those rules in place, humans could rest assured that their interactions with robots would be safe. Even if a robot had been ordered by a human to harm another human, its “digital conscience” would stop it from carrying out the order.

I believe a similar code of ethics should be applied when developing AI. Data scientists and IT professionals will need to review the results of their AI algorithm’s analysis not only for accuracy, but also for the fair and ethical application of that analysis.
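
What might that review look like in practice? Here is a minimal sketch, assuming you already have each record’s prediction, its ground-truth label and a group attribute; the group names and the disparity threshold are illustrative choices, not a standard.

```python
# Hypothetical fairness-review step: report error rates per group and flag the
# model when the gap between best- and worst-served groups exceeds a threshold.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparity(rates, max_gap=0.05):
    """Flag the model if per-group error rates differ by more than max_gap."""
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, gap

# Example usage with toy data:
rates = error_rates_by_group(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 0],
    groups=["a", "a", "b", "a", "b", "b"],
)
flagged, gap = flag_disparity(rates)
print(rates, "flagged:", flagged, "gap:", round(gap, 2))
```

The flag doesn’t decide what to do about a disparity; it simply forces a human review before the model ships, which is exactly the role of a fail-safe.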

While this will take much work and collaboration among businesses, academia and end users, it is critical that we adopt ethical fail-safes when applying AI technology in our daily lives. AI is developing rapidly, and the risk that it could enable unethical behavior by its users, even unintentionally, is real. These fail-safes need to be defined now so they can be used to manage the performance of AI applications in the future.

