Using AI for ethical impact with Lisa Rau

Co-founder Lisa Rau sat down with George Weiner, Founder and CEO of Whole Whale, recently for a conversation about artificial intelligence (AI) and opportunities and implications for nonprofits and associations.

Listen to the podcast below!


This week, a great conversation about what AI really is, how we should be using it, and some of the concerns around AI ethics.

This is Using the Whole Whale, a podcast that brings you stories of data and technology in the nonprofit world.

George Weiner (GW): My name is George Weiner, your host and the Chief Whaler of Whole Whale. Thanks for joining us. I’m here with Lisa Rau, the co-founder of Fíonta. Lisa, thanks for joining us.

Lisa Rau (LR): Oh, it’s my pleasure George. Thanks for having me.

GW: Well, we’ve known each other for a while. You’re part of our esteemed vendor network, people that we absolutely trust to do great work out there in the world, but for those of you who don’t know, Lisa, can you tell us a bit about Fíonta?

LR: Sure! So, Fíonta was started 18 years ago to fill what we identified as a void in the nonprofit sector: getting affordable, quality information technology services. And so, we’ve been working with nonprofits now for the past 18 years, primarily doing work with Salesforce and their nonprofit software, with web design and development, and with strategic technology consulting.

GW: Awesome and I will say we love you all for Salesforce support when I’m confused but recently you’ve shown up on my radar in particular because of your work and conversations and statements and writing and talking about AI. Why don’t you walk us through what’s getting you excited about AI?

LR: Well, in order to do that I really have to go back a ways, because I actually got started in AI in the late 70s and early 80s at Berkeley, in the Berkeley Artificial Intelligence Research group (BAIR), where I was studying computer science. I ended up getting my Ph.D. in artificial intelligence and went on to do research in AI at GE’s corporate research facility for many, many years. At that point it seemed like AI was all hype and was never going to be something that people could really use, which is kind of disturbing coming from someone who specializes in the area: to think that the technology was never going to be useful to the consumer. But as we’ve seen in the last few years, AI has made some dramatic shifts, and the thing I’m most excited about is that it is finally reaching the consumer. It doesn’t require the heavy development of complex, custom models and very expensive techniques to implement, and that means we’re really on the brink of seeing artificial intelligence incorporated into our day-to-day much more. That is very exciting.

That’s near term. But if I can, what I’m really excited about is stuff that is still science fiction, though we’re starting to see some really amazing glimmers: robotic arms that are actually controlled by human brains, severed brains demonstrating electrical activity after the individual is dead. I want to think about how machines will become more like us and, even more exciting, what they’re going to look like when they become more like themselves. So of course I got into artificial intelligence because the possibilities are very, very exciting and thought-provoking, and it’s great to be living in this time of great technological change.

GW: Yeah, there’s certainly no shortage of the what’s possible and we’ve moved through, I think you’re sort of noting the AI Winter, where we’re like “You know there’s no pony here, this is never going to be a thing. We don’t have the tech, we may never have the tech to make these types of resource-heavy queries and actions”, right?

LR: Well, it’s interesting that you use the term AI Winter, because it’s actually AI Winters. Over the last thirty, forty years, it has been very cyclic: we see a lot of hype and a lot of stories about how artificial intelligence is coming, the robots are coming, the self-driving cars are coming; then there’s investment corresponding to the hype; then it drops off and we enter an AI Winter. We’ve had at least two AI Winters in the last few decades, and right now I believe we are on a cusp where we are finally going to see some of the promise that the hype has been promising for so long. It’s really a pivotal moment, and this time, for the first time, I feel fairly optimistic that it isn’t hype. It’s real.

GW: So, this is like when your friends hype how awesome Hamilton is and you’re like “Oh my gosh, this show has been hyped so much, there’s no way it meets expectation.” And then it does and then even exceeds it. Right? So, this is like the Hamilton moment.

LR: (Laughs) Yes, I love it.

GW: I need everything in Broadway terms. I need everything in musical Broadway terms.

Before our audience gets like, “Ok, we’re talking about a 2001: A Space Odyssey situation here,” Lisa, I need you to explain what I need to know about AI as if I were the board chair or an executive of a mid-to-large nonprofit. Like, what do I need to know? Do I need to know that there’s a robotic arm somewhere trying to flip over a cup?

LR: One thing I know about nonprofits is that there’s a nonprofit for everything, and there are nonprofits whose missions are to help disabled individuals function with as few impediments as possible. In that context, I do think robotic arms would be of great interest. But I don’t think that’s what you were really asking. Really, AI doesn’t work without large quantities of data. So I think the first thing to ask yourself, if you’re at a nonprofit trying to determine whether this is something you should be considering, is to do an assessment of the quantity and quality of the data you’re collecting, because without those large volumes, the technology still will not do anything.

GW: Ok, so let’s go further into this. Give me the crash course, like how to explain AI to your parents.

LR: Well, the classic definition is very accessible: AI is when the computer is doing things normally thought of as reserved for humans, demonstrating, and I’m putting this in air quotes, “intelligence.” But beyond that definition, as soon as we program computers to do something humanlike, it’s no longer artificial intelligence. That’s the joke among us AI people: once we’ve figured out how to program a computer to do something, like play Go or play chess, at that point it’s just an algorithm and there’s nothing special about it.

But I think it’s very important to understand a fundamental aspect of AI: how it works and what it can and cannot do. We often talk about a narrow version of AI, which is artificial intelligence that can do one thing really, really well, and the example is the game-playing examples I’ve given. There are many examples of what in the past used to be called expert systems; for example, a doctor can type in a patient’s symptoms and the AI system will recommend certain potential diagnoses, hoping to help the physician not overlook any possibilities. And then there’s a very broad AI, where the computer can deal with inputs beyond a very constrained problem.

And I’ll give you a compelling, hopefully compelling, timely example of this. For those of you aware of what went on on the show Jeopardy! in late April, James Holzhauer became one of the richest and most successful Jeopardy! players of all time. IBM has a long history of innovation in AI, and they have a technology called Watson. In February 2011 they programmed Watson to play Jeopardy!, which is an example of the narrow AI I was just talking about, and in the contest it did win. But it got some of the answers very, very wrong. For example, one clue asked for a US city, “Which US city blah blah blah?”, and Watson suggested Toronto, Canada. It also stumbled by suggesting “chemise” for a clue in the “Also on your computer keys” category. I bring up those two examples because they really show that computers are very good at matching questions to large databases of potential answers, but they don’t actually understand anything. And this is a very important concept when you’re thinking about AI. We’re all being sold on this idea of the singularity, when computers become self-aware and can start making decisions and learning on their own, but we’re quite far away from that. We’re still in the mode of computers that just do what they’re programmed to do, in a very sophisticated way, much faster than we can, and with access to much more information. But we are still very far away from computers truly understanding anything.

GW: Gotcha. So, you’re talking about great examples of narrow AI, and I guess the alternative to that is broad AI. Are there any examples of that, or is broad AI a future far away?

LR: Well, broad AI actually is here today, and I started out by saying I was very excited that AI is finally working and may have real applicability. We already see it, for example, in Netflix, when you see how smart they are about recommending movies to you. This is a commercial application of very advanced kinds of matching algorithms. For broad AI to work, the major innovation that’s been driving AI down to consumer use, or nonprofit use, is the ability for systems to learn without having a model of what they’re trying to learn created for them in advance. I know that’s a little abstract. But the main limiting factor we’ve finally overcome is that computers can not only learn patterns and discover information in large quantities of data; the AI subcategory of machine learning has allowed computers to learn the models that make that type of analysis possible. This is really a major breakthrough. It wasn’t a technical breakthrough in the sense that it leveraged existing technologies, but the computers got fast enough, and the data got large enough, that enough calculations could be done for you to set AI loose on a large database and have it first figure out what the blocks of conceptual information are. For example, it may learn that there is demographic information that goes with people, and that forms one concept, if you will. It can group independent pieces of information into larger chunks, and those chunks are what’s analogous to conceptual information; it can then work with that more conceptual information to make discoveries. I know this is a long answer to your question, “What about broad AI?”, but with the advent of computers learning their own models, computers don’t have to be time-consumingly programmed by experts to develop these models. They can actually do that work for us, which was a precondition to the actual analysis process.
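[Editor's note] To make that idea concrete, here is a toy clustering sketch. It is not anything Salesforce or Fíonta ships, and the "constituent" data points are invented for illustration; but k-means is a simple example of the flavor Lisa describes: the algorithm is handed raw points with no predefined categories and discovers the groupings itself.

```python
def dist2(a, b):
    """Squared distance between two 2D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, k, iters=20):
    """Toy k-means: discovers k groups in raw 2D data without being
    handed a model of what the groups are in advance."""
    # Deterministic init for the sketch: spread starting centers over the data.
    centers = [points[i * len(points) // k] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        # Update step: each center moves to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(x for x, _ in cl) / len(cl),
                              sum(y for _, y in cl) / len(cl))
    return centers, clusters

# Invented data with two obvious groups; the algorithm is never told that.
data = [(1, 1), (1.2, 0.9), (0.8, 1.1), (10, 10), (9.8, 10.2), (10.1, 9.9)]
centers, clusters = kmeans(data, k=2)
```

Run on these six points, it recovers the two underlying groups and their centers on its own; real machine-learning systems do something analogous at vastly larger scale and dimensionality.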

GW: And then I hear a lot of people talk about machine learning. Is that the same thing as AI? What is the relationship of machine learning to AI?

LR: So, machine learning is a subcategory of AI. Artificial intelligence has subdisciplines, as most scientific areas do. My sub-area was natural language processing, which is getting computers not just to read text or parse speech, but to actually understand what the text means. Machine learning is another subdiscipline of artificial intelligence, where computers can be set loose on data and learn other information from that data without a lot of human interaction. Other sub-areas include vision processing; many of us have read about China’s incorporation of facial recognition technology, which is based on image processing, and that’s another example where artificial intelligence has leapfrogged into the commercial, in this case governmental, arena. Robotics is another sub-area of AI. Machine learning is the most important sub-area, the one whose advances are causing AI to flow down into the consumer and nonprofit sectors.

GW: Yeah, so let’s go to our sector. Let’s go and say, ok what are some interesting AI-for-good applications that you’ve seen, that have you excited.

LR: So, there are a lot of areas where nonprofits can use AI technology. I’m going to go through the examples I’m most familiar with. We focus, as mentioned, on Salesforce, which is the #1 constituent relationship management system in the world and has a nonprofit version for fundraising, volunteer management, event management, and other business processes that nonprofits use. Salesforce has been a leader in incorporating artificial intelligence into their software, and there’s a lot of functionality that comes out of the box, with some basic configuration, to do things like guiding development officers to their best donor leads, increasing donor conversion rates, and raising more funds. The system can also automatically identify pledges that were scheduled to close, for example, and suggest to the user that they may want to send an email. It can score participants to determine, for example, how likely they are to complete a program, and do other kinds of automated forecasting.

So, the opportunities for artificial intelligence for nonprofits run throughout the business processes. It can be used for automatically capturing information from text. You can incorporate applications of natural language processing, for example, to read narrative reports coming from supporters or from program participants. If your nonprofit collects images, let’s say you’re an environmental group, you can use vision processing to measure receding glaciers or automatically identify deforestation in other countries, things like that. Overall, discovering insights in your data, predicting outcomes, recommending the best actions to take to maximize engagement, and automating routine tasks are the general areas of most applicability for nonprofits.

We’ve also seen nonprofits start to look into chatbots, which may sound bad. For those of you who may not be familiar with them, chatbots are basically robots without a body. They’re machines that connect to users via a chat window and reply, and a lot of the time people aren’t aware that they’re talking to a chatbot. I think most of us have had the experience where our phone rings and a very naturalistic voice says, “Hi, is this a good time to talk?” and we say yes or no and go on with the conversation before we realize we’re talking to a machine. Chatbots are very similar. They do require some intelligence, but they are able to handle routine kinds of interactions with supporters via chat, thereby reducing the amount of time your staff has to spend. Those are just a few of the areas that are really exciting and interesting for the nonprofit sector.

GW: I have to ask before we continue. Lisa, blink twice if you’re a robot.

LR: (Laughs). It would be better to know that I laughed, because humor is one thing that is almost impossible for computers; I think it is impossible. The famous Turing test is the method Alan Turing came up with to determine whether you’re talking to a computer: whether or not it can pass as human. That’s why your question was very funny. You were just conducting a Turing test on me to try to figure out if I was a robot. But I do think the fact that I laughed at your question is conclusive proof that I am not a robot.

GW: That’s exactly what the robots would want us to think but it may be too late.

LR: (Laughs more) But humor is very, very hard for computers, both to understand and to demonstrate.

GW: Okay, so you know, I think there’s a lot of potential out there. It feels like a fog of supercomputers and super-abilities that are far beyond a nonprofit saying, “I’m going to create my own machine learning algorithm to do X or Y,” versus “I’m going to passively use the many tools afforded to me through the productization of AI.” It seems like most nonprofits are beginning to be able to take advantage of what I call the productization of AI. Do you agree with that?

LR: I think it’s still early days and the jury is still out. Microsoft, Amazon, Salesforce, and many other large tech vendors have packaged AI tools, but we are still in the early days of applying them. What we do, and what we’re very interested in, is helping organizations assess the applicability of AI to their work. That’s really the first step: to pilot, or to scope out and do an assessment, because we’ve come full circle. You need large quantities of good data for AI to work, and you’d want to assess whether or not your data will actually lend itself to the kinds of processing AI requires. Do that first: a real assessment of the applicability and the probability of success before you invest a large amount in AI.

GW: I think that’s a fair warning, for sure. On the topic of warnings, let’s say I’m working in civil rights. I may already be aware of some of the downstream, on-the-ground impacts of AI and this sort of touches, I guess, into AI ethics. Just because we’ve found a dataset doesn’t mean it’s not a biased dataset. And if we create an algorithm that can simply be biased faster than humans who created that data, is that a good thing or a terrible thing?

LR: Yes, that is the fear, and I’ve been very pleased that there’s been a lot of awareness around the potential for AI to do more harm than good, because you’re absolutely right: if the actions of a group are biased, the data, when run through the complex calculations that AI performs, would simply encode that bias, and it would reflect that bias in future decisions. The classic example is in giving out loans, or what was called redlining, where people were not allowed to live in certain neighborhoods, or were steered into certain neighborhoods, creating great divisions in society, or were turned down for mortgages based on illegal attributes. We want to be very careful to reflect on and test what the systems are doing. As a matter of fact, along these lines, it’s been proposed that there be a fourth law of robotics. Arthur C. Clarke, years ago, came up with the three laws of robotics; the first one was never kill, or never hurt, a human. I don’t remember the other two, but the fourth one being proposed is that any artificial intelligence must explain the basis for any decision it makes, as a way to provide visibility into these kinds of potential bias. Because it is a real issue, and there are many examples of systems trained on datasets encoding existing bias. An interested reader, and we’ll post this link with the podcast, might want to look at the book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil, which was published a year and a half ago and is a very, very good analysis of how artificial intelligence can encode bias and lead to some undesirable societal side effects.

GW: The warnings are out there, and I like the work being done around this, what I’ve seen at conferences as well. So, oh my gosh, Isaac Asimov’s laws are also really firm about this. Law one: a robot may not injure a human being or, through inaction, allow a human being to come to harm. Law two: a robot must obey the orders given it by human beings, except where such orders would conflict with the first law. And law three: a robot must protect its own existence as long as such protection does not conflict with the first or second laws.

LR: That’s what I said? I said Arthur C. Clarke, but yes, they’re Asimov’s laws.

GW: I’m a huge Isaac Asimov fan, so I couldn’t let the people be deprived of that.

LR: No, you’re absolutely right.

GW: Moving in this direction, I’m tempted to play the pro/con game, but I’m more tempted to surprise you with the following. You and I are going to have an “idea off”. I’m going to give you a nonprofit and what they do and a cause and you’re going to rattle off, off the top of your head, what you think potential AI / machine learning opportunities could be for that. Does that sound terrifying?

LR: I’m game.

GW: Ok, first up…This is a nonprofit focused on recycling. They collect large amounts of recycling, let’s say San Francisco and they have to bring it all back, sort it all out, and basically it’s a fee-for-service. Is there a use for AI here?

LR: Oh, I think you’d be hard-pressed to come up with examples where I couldn’t come up with some uses, so…

GW: Oh, let’s go, let’s go!

LR: Right off the top of my head, I thought of two areas. One was to detect patterns in the volume and nature of the recycling to better optimize collections. The second was to analyze the people who are recycling, determine their attributes, and use those attributes to identify people in the same categories who aren’t currently recycling, then try to reach out to them directly. So, it’s about using the data you have to discover actionable insights that will increase the efficiency, the effectiveness, and just the volume of your work. So those are two areas…

GW: Next up, I’m dealing in education. I’m running an after-school education program for kids; I might deal with art, I might deal with writing, but we’re trying to improve literacy outcomes for young people through our program on the ground. It could be implemented through the Boys and Girls Clubs, a group like Art in Action, or anything else working on the ground with kids.

LR: That’s a great one. That’s actually a classic example where artificial intelligence can predict which child is likely to drop out and what exactly each child might need to get to the next level in their art: really, to get on top of barriers to success before the child disengages from the program, to keep retention up, and to help tailor the programs to maximize success. That application works best when you also collect information after they’ve left the program, so you can better correlate the specifics of the program to long-term success.

GW: Ok, I deal with a fundraising department, frankly an organization who raises money to solve a disease, be it cancer or other, but I’m on the fundraising side so sure maybe we’re investing in the people in white coats who are analyzing the outcomes, but we’re just writing checks as an organization. How do I use AI?

LR: Well, you would use it to be more effective at fundraising: identifying donors who are more likely to give, or capable of giving more, whom the system has automatically flagged as good prospects to ask for more money, as well as prioritizing and screening new prospects for their giving ability, so you can raise even more money. Unfortunately, we’ve seen the impact of microtargeting, for example, in social media, in influencing people’s perceptions and in providing highly tailored information that will appeal to them, and that’s all based on AI-type analysis.

GW: My organization deals with creating jobs. We do job training, job placement, and re-education, and it seems like AI is just taking all the jobs. How do I use it to stop it doing that?

LR: Well, it’s really about matching. It’s about taking hundreds of thousands of pieces of data about individuals, their history of employment, their education, where they live, their skills, and hundreds of thousands of potential jobs, and correlating all of that with success at actually landing a job and staying in the job, to better match individuals to jobs. And the beauty of artificial intelligence for that type of application is that the computer doesn’t know that it’s not typical to have women running the steamrollers on the road crew, so it’s going to be very unbiased in matching individuals to potential jobs and will surface jobs a human might not think of. It’s funny, because that’s a benefit here, but it was also the concern earlier. One of the real benefits is that it’s only looking at the data, not carrying the biases humans have, and it’s trying to be comprehensive. Because it can run through so much data so quickly, it really does have the ability to optimize these matches.

GW: Lisa, I deal in direct service. I look people in the eye, I run a soup kitchen, I run a thrift store, I am working with the elderly in a similar to Meals on Wheels situation, but ultimately you can’t automate empathy where I come from and it’s all about that handshake and the respect I give people in person. I mean, c’mon how is AI helping me?

LR: It’s behind the scenes; it’s definitely in the back office. I completely agree with you that AI shouldn’t replace the human touch. There’s so much we learn, especially in these contexts, from our own observations of the individuals as we’re interacting with them. But if we’re collecting enough information, we can then identify better ways to further help them in a more automated way, as well as make very small improvements to our operations. For example, optimizing the drop-off route for Meals on Wheels deliveries: that’s a very hard problem to solve, and this technology can suggest routes for you…

GW: Technically AI has not solved, sorry I gotta interrupt, technically AI has not solved the traveling salesman problem.

LR: (Laughs) You know too much, George.

GW: So, I’m calling you out here, they haven’t done it yet.

LR: You’re absolutely right, George. The traveling salesman problem is computationally intractable; there’s no known way to solve it efficiently at scale. But I didn’t think you’d know that.

GW: Not with that attitude.

LR: But AI can still do what’s called satisficing, which is a way to approximate an optimal solution. You’re absolutely right, though: that is not a problem AI can solve exactly.
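[Editor's note] The satisficing Lisa describes can be sketched in a few lines. This nearest-neighbor heuristic (coordinates invented for illustration; real routing tools use far more sophisticated approximations) builds a delivery route greedily: good enough and cheap to compute, even though the truly optimal tour is intractable to find at scale.

```python
import math

def nearest_neighbor_route(stops, start=0):
    """Greedy 'satisficing' tour: always visit the closest unvisited
    stop next. Not optimal, but fast and usually good enough."""
    unvisited = set(range(len(stops))) - {start}
    route = [start]
    while unvisited:
        cur = stops[route[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(cur, stops[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

def route_length(stops, route):
    """Total distance of the tour, leg by leg."""
    return sum(math.dist(stops[route[i]], stops[route[i + 1]])
               for i in range(len(route) - 1))

# Hypothetical meal drop-off coordinates (e.g. miles on a city grid).
stops = [(0, 0), (5, 0), (5, 5), (0, 5), (1, 0.5)]
route = nearest_neighbor_route(stops)
```

The greedy route here visits every stop exactly once in a sensible order, computed in a fraction of a second; checking all possible orderings instead grows factorially with the number of stops.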

GW: Satisficing sounds like how my dad drives around and claims he’s not lost. He satisficed us through many a situation, I’ll say that. Love you Dad! Alrighty, you’ve done admirably with like literally I gave you no notice of this, but I just like seeing how people perform. Well done, you get one gold star.

Um, ok. Ok, Lisa, before we move into “Rapid Fire,” is there anything you feel people should know, or be on the lookout for, with regard to this technology?

LR: Well, I’m biased, like we all are. I think education is important, and I would encourage everyone to do some analysis of whether AI could be of benefit to them before diving in whole hog, because it is new. I think it’s important to ask the question within your own organization. The time is now, and you might be surprised by what surfaces.

GW: Ok, are you ready for “Rapid Fire?”

LR: As ready as I’m gonna be.

GW: Brilliant, keep your responses within seconds, thirty seconds ideally. Ok what is one tech tool or website that you or your organization has started using within the last year?

LR: We started using a system called GatherContent for our website design and development projects, to assist with content organization and editing.

GW: What tech issues are you currently battling with?

LR: I’m battling the distraction factor of tools like email and Slack. It’s very hard for us to stay focused for long periods of time because of the plethora of distractions and the 24/7 news cycle.

GW: What is coming in the next year that has you the most excited?

LR: Artificial intelligence for nonprofits.

GW: Talk about a mistake you made earlier in your career that changed the way you do things today.

LR: Well, it’s funny, this isn’t a technology thing. But if I could go back, I would toot my own horn more. I always believed that if you just kept your head down and did a good job, you would get recognized and promoted. To borrow your Broadway-show motif: in Rent, my favorite musical, in the “Over the Moon” song with the cow, she says it’s a female thing, and I do think it is, that attitude toward office politics in general.

GW: Can NGOs successfully go out of business?

LR: Apparently! I’ve had at least a dozen of our nonprofit clients go out of business since we started working with them.

GW: To note, was it a successful exit?

LR: Many were. They wound down because their mission was complete, or the foundation dispensed all of its assets, or the endowment was spent down, or they merged into a larger group and created some real benefits through that.

GW: If I tossed you in the Hot Tub Time Machine, back to the beginning of cofounding Fíonta, what advice would you give yourself?

LR: If you do what you love, you never have to work a day in your life.

GW: If you had a Harry Potter wand for the industry, what would it do?

LR: Well, in the technology arena, it would provide more funding support for nonprofit technology and an understanding that technology is a core area of capacity building, like fundraising, governance, and finance. In the nonprofit sector overall, I think organizations should merge more; there’s a huge amount of duplicative effort in the sector. And I’d like the sector to continue focusing on using data to drive impact measurement, and to view a lot of what nonprofits do as social science.

GW: How did you get started in the social impact sector?

LR: Well, it’s an old story. It’s so old it’s almost a cliché. I got to a point in my career where I wanted to feel like I had left the world a better place than when I arrived and I wanted to use all of my experience and expertise to help the social impact space and so I started a company focused on technology for nonprofits to do that.

GW: What advice would you give college graduates looking to enter the social impact sector?

LR: The main thing is to really do a lot of research, because more and more we’re seeing companies with social impact objectives, this concept of a triple bottom line, and lots of “helper” groups. I think it’s important to take your time when looking for your first job, to ensure you have a good understanding of the social impact space, looking at B corporations, for example, and social impact corporations. Just take your time, because there’s no end of options out there.

GW: Ok, final question. How do people find you? How do people help you?

LR: I’m lrau, Lisa Rau, and you can go to our website and check us out.

GW: Lisa, thank you so much for sharing your wisdom and putting up with my shenanigans. I think we learned something here and gave some folks a lot of fodder at least.

LR: Thank you, George. This was a wonderful conversation. I’m very impressed. You got me at least twice, so fair game.

GW: I would say, “Who’s counting?” And “This isn’t a contest” but I’m always counting and I’m always in a contest. Lisa, thank you.

LR: Thank you, George.

GW: We’re going to have a lot of show notes for you on this episode, number 126, tons of links. If you want to learn more about the practical uses of narrow AI, there’s no shortage of opportunities out there. It’s important that, like Lisa, we pay attention to what the downsides can be if the technology is used incorrectly. But more importantly, she was beating the drum and getting us excited: no matter what I threw at her, she was able to say, “Look, there’s an area, there are practical applications.” And they’re only getting stronger. It’s 2019, and we’re seemingly only at the beginning of this. I hope it got you excited, and I hope it didn’t overwhelm you.

Take a look at some of the resources, episode 126,