AI/Analytics

Ethical use of artificial intelligence

Is Artificial Intelligence (AI) overhyped, dangerous, or does it present opportunities not to be missed?

It is probably all three, but not in the ways you might think from reading the clickbait. As someone who has devoted much of my career to AI (as a student, researcher, and practitioner), my position is that AI is a powerful tool, to be used wisely and carefully. The ethical dangers of AI, as with any powerful tool, come from human beings failing to understand it or to use it intentionally and properly.

The doomsday scenario: Computers take over

I must start by dismissing the doomsday scenario – computers are far from doing sentient evil.

It sounds silly even to consider this possibility, but, as Maureen Dowd wrote in Vanity Fair (describing the debate between famous doomsayer Elon Musk and AI guru Demis Hassabis), it’s “just a friendly little argument about the fate of humanity”. A number of outspoken geniuses, including Musk, Marvin Minsky, and Stephen Hawking, have warned about computers superseding us.

While I think the “robo sapiens” threat is worthy of consideration by visionaries and philosophers, I side with the Andrew Ng camp (he’s a founder of Google Brain and a pioneer in what’s known as “deep learning”): I’ll worry about this when we have to worry about overpopulation on Mars. That’s when you should start worrying, too.

The real threat: We rely too much on insensible machines

The dangers of deploying AI stem not from machines being too smart, but from their not being smart enough. Even at their most powerful, they are not nearly as smart as we are:

Programs can’t explain what the heck they are doing or why they are doing it, overcome their biases, or consider that what they’ve been programmed to do may actually be wrong (factually, or in the harm it causes people). That’s what people are for.

Most of us are constantly explaining our actions and considering their effects on other people. So if we are in a position where we are using AI (such as for discovering strategies, detecting anomalies, or mining and visualizing data), we have to do that human job as well.

As we move toward deploying AI systems in our organizations, this gives us three things to watch out for:

  1. AI can encode bias
  2. AI can drive decisions without an explainable basis
  3. AI can “discover” things that infringe on personal privacy

AI can encode bias

Even before today’s AI-based learning programs, computer results could reinforce biases that can be considered unfair. The bias can stem from the data used, either because the data are flawed or because aspects of the data are ignored. For example, older drivers can have more accidents than middle-aged drivers. Does this make it fair to charge them more for insurance, given that insurance rates already factor in their driving records? A system trained to pick out pictures of weddings might look for women in white dresses with veils, holding bouquets. This will miss many pictures! Is it fair to overlook the “unconventional” wedding?

From the nonprofit perspective, consider a hypothetical system to rank applicants for a college prep program. It might detect a statistical correlation between zip code of residence and acceptance at college that has the effect of discriminating based on income, or of perpetuating societal biases. It simply doesn’t work to say “that’s what our program came up with”: using a result that reflects an unfair bias is itself unfair.

One approach to guarding against this kind of bias is to segregate the data into a training set and a testing set. After the system has been trained, the testing set is used to compare the AI system’s results to the decisions that a human being would make. But only an ethical, intelligent person can draw conclusions from the system’s results on the test set. Interested readers will find much more in Cathy O’Neil’s excellent book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy” (2016).
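To make this concrete, here is a minimal sketch in Python (with entirely hypothetical file, column, and feature names) of holding out a test set, comparing the model’s decisions to the human decisions it was trained to imitate, and checking whether its selection rate differs across groups:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical applicant data that includes past human decisions
    data = pd.read_csv("applicants.csv")
    features = ["gpa", "essay_score", "attendance"]   # hypothetical features

    train, test = train_test_split(data, test_size=0.3, random_state=0)

    model = LogisticRegression().fit(train[features], train["human_decision"])
    test = test.copy()
    test["model_decision"] = model.predict(test[features])

    # How often does the model agree with human decisions it never saw?
    print("Agreement:", (test["model_decision"] == test["human_decision"]).mean())

    # Selection rate by group (hypothetical zip-code grouping);
    # a large gap between groups is worth a human look
    print(test.groupby("zip_group")["model_decision"].mean())

The numbers themselves decide nothing; they simply give that ethical, intelligent person something concrete to review.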

AI can provide decisions with no explainable basis

The incredible advances in AI have been driven primarily by advances in computing power and the availability of very large data sets, rather than by advances in the underlying theories. IBM’s Deep Blue chess-playing program finally beat the world champion, Garry Kasparov, not by knowing more about chess but by quickly evaluating many billions of board positions.

In Adnan Darwiche’s excellent and accessible article “Human-Level Intelligence or Animal-Like Abilities?” (Communications of the ACM, October 2018, Vol. 61 No. 10, pages 56-67), the author explains how many current machine learning systems are “function-based” as opposed to “model-based”. In the function-based approach, the system automatically builds statistical models that cannot be translated into heuristics that humans can understand. For example, a system may be able to tell a picture of a man from a picture of a woman, but cannot say whether a beard or an Adam’s apple was relevant to that judgment.

Systems that make or drive decisions for unknown reasons are cause for concern. Without knowing what drives a result, it’s hard for a human being to assess whether it’s correct, both practically and ethically. And without an explanation of the result, it’s hard to correct the pattern that produced an incorrect one.
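One partial workaround is to probe the black box from the outside: shuffle one input at a time and watch how much the accuracy drops. The hedged sketch below (with a hypothetical data file and feature columns) does not recover the system’s reasoning, but it at least hints at which inputs the result is riding on.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    data = pd.read_csv("labeled_examples.csv")        # hypothetical data
    X, y = data.drop(columns="label"), data["label"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    baseline = model.score(X_test, y_test)

    # Shuffle each feature in turn; a big accuracy drop means the model
    # leans heavily on that input, even if it cannot say so itself.
    rng = np.random.default_rng(0)
    for col in X.columns:
        shuffled = X_test.copy()
        shuffled[col] = rng.permutation(shuffled[col].values)
        print(f"{col}: accuracy drop {baseline - model.score(shuffled, y_test):.3f}")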

AI can “discover” things that infringe on personal privacy

Back in 2012, there was a much-repeated, apocryphal story about Target analyzing purchases and mailing “targeted” coupons. This went awry when a father received coupons for diapers, inadvertently letting the secret of his daughter’s pregnancy out of the bag.

Because AI can analyze large quantities of data, it can put together information that, when combined, leads to uncomfortable conclusions. Analyzing the location data from a smartphone can highlight regular trips to a parole officer, a medical clinic, or other places the owner might want to keep private. AI techniques can analyze polling data to micro-target social media postings and influence an election; sound scary but familiar? And face recognition combined with omnipresent cameras makes state-sponsored surveillance possible today: in China, you can receive an automated citation because a camera caught you jaywalking. This power is real (computers can be programmed to recognize you and tell exactly where you are), and it is up to us to decide when this is “cool” and when it’s “creepy”. It’s cool when your phone suggests an alternate route due to street closures. It’s creepy when your phone suggests you buy new underwear.
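To see how little sophistication the location example requires, here is a minimal sketch with invented coordinates: simply rounding points to roughly a city block and counting repeats is enough to surface someone’s routine stops.

    from collections import Counter

    # Invented (latitude, longitude) samples from a phone's location history
    points = [
        (38.9072, -77.0369), (38.9071, -77.0370), (38.9073, -77.0368),
        (38.8951, -77.0364), (38.9072, -77.0369), (38.8951, -77.0365),
    ]

    # Round to ~3 decimal places (about a city block) and count repeat visits
    visits = Counter((round(lat, 3), round(lon, 3)) for lat, lon in points)
    for place, count in visits.most_common():
        if count > 1:
            print(f"{place} visited {count} times")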

How AI can help

While we have concentrated here on areas where AI can be potentially unethical, it is worth noting that there are far more ethical uses of AI than unethical ones. Especially with model-based approaches, AI can help solve problems (think medical diagnoses or troubleshooting equipment), surface potential “diamonds in the rough” for consideration (think scholarships or speakers), and ensure consistency in evaluation (think loans), not to mention those self-driving cars we are eagerly awaiting. The opportunities of AI outweigh the risks, but only when smart computers work together with smart people.

Summary

The rapidly widening availability of AI technology, more than that of other technologies, brings into focus its potential for both ethical and unethical uses. Before letting an AI system run wild, it is prudent to test it and to think carefully through its behavior from this perspective. Experimentation, review, and analysis will help keep your AI systems on the straight and narrow.

About the author

Lisa Rau is a founder and chairman of Fíonta, a Salesforce.org Premium Consulting Partner that focuses on the nonprofit sector. She served as CEO from 2001 to 2018, during which time Fíonta provided support to nearly 1,000 nonprofits. She holds a B.S., M.S., and Ph.D. in Computer Science, focusing on Artificial Intelligence. Lisa started at the Berkeley Artificial Intelligence Research group, where she was lisa@berkeley.edu in the early years of the Internet, and as an NSF visiting scholar had an office at Penn adjacent to ENIAC, the first fully programmable electronic computer (famously programmed by female engineers while men took the credit). She led AI research for eight years at GE Corporate Research, where she published some 50 articles in peer-reviewed journals and conference proceedings. In 1990, the magazine Popular Science quoted Lisa as saying, “In the future we can imagine the computer as a librarian who not only knows where all the information in the library is kept, but has read and understood everything.” She feels partly vindicated, but is still imagining what the future holds.