
Does Artificial Intelligence Have Ethics?

In 2016, Microsoft released a chatbot designed to interact with people over Twitter. Enabled with an AI routine that analyzed speech, the bot was supposed to show how machines could mimic human communication. Unfortunately, Microsoft had to remove the bot when it began tweeting racist and sexist comments; its AI engine had been flooded with hate speech from pranksters and other bad actors online. Now, the AI routine itself was certainly not sexist or racist; it was merely imitating speech based on the data it received. I’m sure this incident led to a lot of jokes about how AI-enabled machines will become evil geniuses bent on subjugating humanity. But I think what it really proves is that the real threat in our current generation of AI isn’t AI; it’s ourselves.

I’ve previously written about how the fear that AI-enabled machines will make human work irrelevant is unfounded. In short, while humans aren’t as fast or accurate as computers when it comes to analyzing data, computers are no match for us when it comes to the informed application of that analysis, creativity, lateral thinking and emotional intelligence. We possess critical thinking and experience that today’s computers simply can’t replicate. Likewise, computers don’t have the capacity for ethical thinking that humans do. Despite its sophistication, AI is, at the end of the day, just another tool. And like any tool, it has no ethics of its own, yet it can be put to unethical ends when wielded by an unethical, unscrupulous or simply ignorant person. More often than not, however, AI behaves unethically because it hasn’t been trained properly.

Take AI-powered vision systems, for example. Recent research revealed that a popular facial recognition software platform had “much higher error rates in classifying the gender of darker-skinned women than for lighter-skinned men.” Is the AI racist? No, but it does replicate the institutional biases inherent in society. In this case, the training data used to teach the AI algorithm to identify faces consisted mostly of white male faces. Accordingly, the algorithm performed better on the faces with which it had more experience. Now imagine law enforcement using this AI-enabled vision technology to scan crowds in busy public spaces for the face of a wanted criminal. If that criminal were a person of color, the chances of the AI incorrectly identifying an innocent person of color as the criminal would be higher than they would be for a white criminal.

The science fiction author Isaac Asimov famously developed the Three Laws of Robotics. The laws set forth a simple ethical framework for how robots should interact with humans, and the laws are unalterably fixed in the robot’s positronic brain (aka AI); they cannot be bypassed. In their entirety, the laws are:

• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

With those rules in place, humans could rest assured that their interactions with robots would be safe. Even if a robot had been ordered by a human to cause harm to another human, its “digital conscience” would stop it from carrying out the order.
I believe a similar code of ethics should be applied when developing AI. Data scientists and IT professionals will need to review the results of their AI algorithm’s analysis not only for accuracy, but also for the fair and ethical application of that analysis.
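To make that review concrete, here is a minimal sketch of the kind of fairness check a data science team might run before deploying a model like the facial recognition system described above. Everything in it is illustrative: the records, the group labels and the 1.25x disparity threshold are hypothetical, and a real audit would examine far more than a single false-positive metric.

from collections import defaultdict

def false_positive_rate_by_group(records):
    # records: iterable of (group, predicted_match, actual_match) tuples
    fp = defaultdict(int)         # false positives per group
    negatives = defaultdict(int)  # actual non-matches per group
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

def flag_disparity(rates, max_ratio=1.25):
    # Flag any group whose false-positive rate is disproportionately
    # higher than the best-performing group's rate.
    best = min(rates.values())
    flagged = {}
    for group, rate in rates.items():
        if (best == 0 and rate > 0) or (best > 0 and rate / best > max_ratio):
            flagged[group] = rate
    return flagged

# Hypothetical evaluation records: (group, predicted_match, actual_match)
records = [
    ("lighter-skinned", False, False), ("lighter-skinned", True, True),
    ("lighter-skinned", False, False), ("lighter-skinned", False, False),
    ("darker-skinned", True, False),   ("darker-skinned", False, False),
    ("darker-skinned", True, False),   ("darker-skinned", True, True),
]

rates = false_positive_rate_by_group(records)
print(rates)                  # e.g. {'lighter-skinned': 0.0, 'darker-skinned': 0.666...}
print(flag_disparity(rates))  # groups whose error rate stands out from the rest

The point is not the particular metric; it is that the fairness review becomes an explicit, repeatable gate that sits alongside accuracy testing rather than being left to chance.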

While this will take a great deal of work and collaboration among businesses, academia and end users, it is critical that we adopt ethical fail-safes when applying AI technology in our daily lives. AI is developing rapidly, and the risk that it could enable unethical behavior by its users, even unintentionally, is real. These fail-safes need to be defined now so they can be used to manage the performance of AI applications in the future.


Written by Flavio Villanustre

Dr. Flavio Villanustre is CISO and VP of Technology for LexisNexis® Risk Solutions, part of RELX. In this position, he is responsible for information security and leads the HPCC Systems® overall platform strategy and new product development. Dr. Villanustre is also involved in a number of projects involving Big Data integration, analytics, and business intelligence. Prior to 2001, when he began his career at LexisNexis Risk Solutions, Flavio served in a variety of roles at different companies spanning infrastructure, information security, and information technology. In addition, Dr. Villanustre has been involved with the open source community for more than 15 years through multiple initiatives, including founding the first Linux User Group in Buenos Aires (BALUG) in 1994, releasing several pieces of software under different open source licenses, and evangelizing open source to different audiences through conferences, training, and education. Prior to his technology career, Dr. Villanustre was a neurosurgeon.

