What is AI Anyway, and How is it Relevant to my Nonprofit?

While Artificial Intelligence (AI) has been a research discipline for over 60 years, it has only recently blossomed and permeated consumer and business technology and applications. The history of AI shows cycles of wild predictions and enthusiasm, followed by disillusionment when predictions about its utility were foiled by the difficult realities. AI has always been the subject of great hope and great hype – recognizing its real potential often requires understanding its weaknesses. After defining AI in practical terms, we cover some of the ways it can be most effectively used in nonprofit organizations.

What is AI?

Researchers in AI joke that once an application of AI works, it is no longer considered AI. Behind this cynicism is the persistent truth that human intelligence is still powerful and mysterious, and computer programs usually work not by mimicking the complexity of the human brain but by simplifying problems so that computers can master them using the great tools computers do have, namely vast memories and calculating power.

AI is “simply” creating computer software that can accomplish tasks (at least with some level of competence and usefulness) that seem to require intelligence in humans. Some examples of such tasks are communicating in everyday “natural” language, solving problems, recognizing images and patterns, learning new skills, and making decisions and plans. One reason AI has gone through booms and busts is that people often underestimate the difficulty of tasks they find “easy” (such as understanding language) and overestimate what computers can do. Unless operating within very constrained domains, AI systems are limited by their lack of context – knowledge that human beings readily bring to bear that stems from our “simply” living in the world with our wealth of impulses and experiences. Because of this, understanding humor, exhibiting basic “common sense”, and working with ambiguity are still intractable areas for computers.

My friend and colleague Jerry Hobbs, during his decades as a principal scientist and program director of the Natural Language Program in the Artificial Intelligence Center at SRI International in Menlo Park, California, declared, “computers are idiot savants”, and this is still true. A computer program may beat the world’s best chess player, master the game of Go, play Jeopardy, or cite quotations from a library a million times larger than the works of Shakespeare. But that same program will be unable to perform even the most basic human task, like understanding the point of a story you would tell your 3-year-old child.

While all the reasons for skepticism, especially the inability to harness commonsense knowledge, still exist, AI applications have steadily progressed by taking advantage of increasing computing power, vast networks and memories, and decreasing costs. Deep Blue beat Garry Kasparov at chess not by being a better player so much as by searching billions of possible moves. Understanding that AI success often comes from the power of data and computing cycles rather than from intuition is part of understanding how to use AI successfully.

AI is actually a broad field that encompasses distinct areas: machine learning, robotics, natural language processing, speech recognition and generation, image processing, problem-solving, and others. It is also a broad collection of technical approaches – from rule-based systems to “neural net”-based systems that use complex mathematical functions to learn complex tasks. But in all these arenas, AI is, in its essence, a computer program that performs a human-like function. Does it require “intelligence” to recognize a face? Drive a vehicle? Play chess? Predict the likely outcome of a game? Have a conversation? If the behavior used to be the province of people and requires some analytical processing and/or expertise, it can be considered AI when done by a machine. “Narrow” AI systems are trained for, and can perform only, specific and constrained tasks. More general AI systems would demonstrate human cognitive functionality, still doing something useful or interesting even when presented with inputs their programmers never expected. Advances in machine learning have expanded the set of tasks narrow AI systems can perform – but no AI technology yet approaches comprehensive human-like intelligence across the core areas of human cognitive activity. So, if you are looking for AI to solve a problem in your organization, you had better understand the problem first, then have an idea of how a computer might be able to solve it.

Relevance to Nonprofits

There are a wide variety of nonprofit organizational tasks that AI can help with, so it’s hard to generalize, but one guideline is to look in areas where your organization has a large amount of data that must be handled in some way that is repetitive or predictable, or that requires significant analysis across many pieces of data. I’ll give some examples below.

Leading technology vendors have made AI tools available to developers—Google’s offerings, Salesforce’s suite of Einstein capabilities, Microsoft’s Azure AI, Facebook’s AI tools for developers, and Amazon’s offerings. These tools give developers the ability to write custom AI-based applications for specific purposes. They can be made to work if the subject area is sufficiently constrained and there exists a sufficient quantity and quality of data to work with. To make these tools “work”, much experimentation, training, and customization is still required – and the behavior of the resulting systems could still disappoint in many areas.

Image Processing

Image processing is compute-intensive, and so has benefited greatly from increases in computing speed and processing power. Any nonprofit where people analyze images could benefit from image processing technology. Some examples include the analysis of aerial images to calculate deforestation, coastal erosion, pollution emitted from smokestacks, bleaching of coral reefs, or glacial melting. Facial recognition can help tag thousands of digital photos taken at a gala with the individuals in them. The hard part of using AI in image processing applications is that it works best when there is a large set of manually annotated examples to train from. And, as recent news of autonomous vehicle and Google image tagging “fails” shows, no system will be perfect.

Natural Language Processing

Natural Language Processing (NLP) includes, for example, spoken language systems (such as the ones you “talk” to in telephone response systems or personal devices), search engines (to a degree), text categorization and similar systems, and machine translation. Many believe advances in NLP arose in part because of the commercial application of the technology toward tasks such as spam filtering and automatically extracting information from very specific types of communications (press releases, maintenance logs, or financial transactions). With a sufficiently large set of examples of questions that people ask of Google, paired with information about which web results those humans click on (indicating a web page that answers the question), systems can seem to understand both the questions and the answers. But the computer doesn’t “understand” the language – it is only very good at connecting questions to clicked-on answers.

For nonprofits, the email analysis application of NLP is likely to be most useful in increasing staff productivity for existing tasks. An example of current capabilities is Google’s ability to identify flights or meetings in messages to be easily added to calendars. Email analysis can help abstract a long email thread into an easy-to-understand history, for example “person A requested information”, “person B provided information”, “person A asked a question about information”.

Any nonprofit that has staff reading and categorizing large quantities of text or extracting specific data from the text has good potential applications of NLP. This applies to survey responses, questionnaires, or client intake forms, to name a few. Have a look at what free text is being routinely stored and used in decision making, reporting or analysis and you may have a great opportunity to automate and improve with NLP technology.
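As a concrete (if toy) illustration, free-text categorization can be sketched with a simple Naive Bayes bag-of-words classifier. The categories and example responses below are hypothetical; a real system would be trained on a much larger labeled set, and would likely use a vendor tool rather than hand-rolled code.

```python
# Minimal sketch of text categorization with a Naive Bayes
# bag-of-words classifier (standard library only).
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayesCategorizer:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word counts
        self.doc_counts = Counter()              # category -> number of docs
        self.vocab = set()

    def train(self, labeled_texts):
        for text, category in labeled_texts:
            self.doc_counts[category] += 1
            for word in tokenize(text):
                self.word_counts[category][word] += 1
                self.vocab.add(word)

    def categorize(self, text):
        total_docs = sum(self.doc_counts.values())
        best, best_score = None, float("-inf")
        for category in self.doc_counts:
            # log prior + log likelihoods with add-one smoothing
            score = math.log(self.doc_counts[category] / total_docs)
            n_words = sum(self.word_counts[category].values())
            for word in tokenize(text):
                count = self.word_counts[category][word]
                score += math.log((count + 1) / (n_words + len(self.vocab)))
            if score > best_score:
                best, best_score = category, score
        return best

# Hypothetical labeled survey responses
examples = [
    ("the program helped my family a lot", "positive"),
    ("great staff very helpful program", "positive"),
    ("wait times were too long", "complaint"),
    ("too long a wait and confusing forms", "complaint"),
]
model = NaiveBayesCategorizer()
model.train(examples)
print(model.categorize("helpful program for my family"))  # "positive"
```

The point of the sketch is the workflow, not the algorithm: staff label a sample of past responses, the system learns word statistics from them, and new responses are routed automatically.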

Machine Learning

Machine learning algorithms underlie the application of tools for predictive analytics, image processing, speech recognition, and NLP. Advances in this area have been central to the current availability of AI tools and programs. A great advance was the creation of systems that could not just learn to perform tasks, but that could learn the underlying features required to perform them. For example, instead of programming a system with concepts such as “eyes”, “nose” and “mouth”, machine learning image processing systems can develop conceptual representations that correspond to facial features automatically, if such representations are useful in the facial recognition task itself. With this single advance, tools could be used without requiring substantial and difficult custom development – but “simply” pointed at a data set and let run.

Predictive Analytics

AI tools can analyze a set of legacy data and, with machine learning, discover which factors in that data can predict other pieces of data. From a nonprofit perspective, there are many potential applications in this area. Which donor is most likely to give a gift of over $1,000? Which students are most likely to drop out of school? What program design is most likely to achieve the desired outcomes? Which volunteer will be most successful at leading an advocacy initiative? What actions are most likely to increase supporter engagement? With a sufficiently large and complete set of historical data, AI can be instrumental in keeping staff focused on the activities most likely to produce results, and/or help surface potential problems in time to prevent them from becoming adverse events.
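To make the donor example concrete, here is a hedged sketch that trains a plain-Python logistic regression on a handful of made-up donor records. The field names and data are hypothetical; a real model would be trained on your CRM's actual history, usually through a vendor tool rather than code like this.

```python
# Minimal sketch: predicting which donors are likely to give a
# gift over $1,000, using logistic regression trained by gradient
# descent (standard library only; toy, hypothetical data).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, epochs=2000, lr=0.1):
    """Fit weights so that sigmoid(w . x + b) approximates the labels."""
    weights = [0.0] * len(rows[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
            error = pred - y
            weights = [w - lr * error * xi for w, xi in zip(weights, x)]
            bias -= lr * error
    return weights, bias

def predict(weights, bias, x):
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

# Hypothetical features: [years as donor, gifts in the last year];
# label 1 means the donor has previously given over $1,000.
history = [([1, 0], 0), ([2, 1], 0), ([8, 3], 1),
           ([10, 4], 1), ([3, 1], 0), ([7, 2], 1)]
rows = [x for x, _ in history]
labels = [y for _, y in history]
w, b = train_logistic(rows, labels)

# Score a prospective donor: long-tenured, frequent giver
score = predict(w, b, [9, 3])
print(f"probability of a $1,000+ gift: {score:.2f}")
```

The output is a ranked probability rather than a yes/no answer, which is exactly what a development team needs to prioritize outreach.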

Decision Making and Bias

One of the best applications of AI is in decision making. Humans are notoriously bad decision makers, afflicted with over 100 different kinds of cognitive biases. We often see applications in this area – decision-making aids such as systems that help recommend treatment plans to a human doctor. While computers are excellent at making decisions based on data, they can also suffer from their own kind of bias if the programmers are not careful. Biases can become automatically encoded if the data set used to train the system causes the system to confuse correlation with causation. As an example, AI decision-making systems could deny loans to people based on their ethnic origin independently of their financial history.
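One practical safeguard is to audit a system's outputs for group-level disparities. The sketch below uses hypothetical records and field names to compare approval rates across groups; a large gap is a signal to investigate, since proxies in the data can encode a protected attribute even when that attribute is never fed to the model.

```python
# Minimal sketch: auditing a decision system's outputs for
# group-level disparities (standard library only; toy data).
from collections import defaultdict

def approval_rates(decisions, group_key):
    """Approval rate per group, e.g. to check for disparate impact."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for record in decisions:
        g = record[group_key]
        totals[g] += 1
        approvals[g] += record["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical model decisions joined back to demographic data
decisions = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]
rates = approval_rates(decisions, "group")
print(rates)  # group A approved 75% of the time, group B only 25%
```

An audit like this does not prove bias on its own, but it tells a human reviewer exactly where to look.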

In Conclusion

AI tools are currently in the marketplace, ready to be experimented with across a variety of nonprofit scenarios. While some tools still require expert programmers to use, others are now embedded in everyday systems already in use. Nonprofits can look forward to reducing the time required for tasks that demand significant manual effort, increasing accuracy in data-based decision making, and improving the effectiveness of their activities by automatically prioritizing them based on predictions. But, if you’re going to be successful, don’t be overwhelmed by the hype and the hope: be a skeptic. Human beings are still the “brains” of the operation – from ensuring that the systems put in place act without bias to brainstorming potential AI activities and recognizing where you can benefit!

About the Author

Lisa Rau is a co-founder and Chief Growth Officer of Fíonta, a Premium Salesforce.org partner that focuses on technology for nonprofits. She started her career in Artificial Intelligence at the University of California, Berkeley, so long ago that her first e-mail address was just lisa@berkeley. She was a member of the Berkeley Artificial Intelligence Research (BAIR) group for four years, receiving a BS and MS in Computer Science. For the next 8 years, Dr. Rau performed basic research in Artificial Intelligence at GE’s Corporate Research Laboratory, publishing widely in peer-reviewed journals and conference proceedings. Lisa’s team was one of the champions of bringing “empirical” (data-based) AI to natural language processing, as part of a program that was awarded one of Vice President Al Gore’s famous “Hammer” awards for reinventing government. In the decades since, she has helped numerous companies, as an executive and consultant, to develop AI programs and related applications.

Lisa received her Ph.D. in Computer Science, specializing in AI and Cognitive Science, from the University of Exeter, in the UK, studying under Noel Sharkey. Her thesis was entitled “A Computational Approach to Meta-Knowledge: Calculating Breadth and Salience.” At that time, she never would have imagined that either “meta” or “AI” would ever be used in ordinary conversation.