You may not build or maintain your nonprofit’s tech tools, and you probably don’t use all of them. But when the C-suite needs guidance on strategic technology decisions, they turn to you.
The time has come to think proactively and strategically about employing Artificial Intelligence (AI) in your organization, no matter its size or mission. AI is everywhere, but mindful implementation takes time, governance, and maybe even some organizational soul-searching.
Where do we start?
AI isn’t a solution looking for a problem. It’s a tool that works best when pointed at a specific challenge. Begin by asking questions across your organization:
- Are you open to using AI in your daily work? First and foremost, are your staff ready to integrate AI into their daily business processes? Some may not be. It’s important to find that out as early as possible, so that you can educate reluctant team members or change course before you’ve invested heavily.
- What could each department outsource to an AI tool? Repeatable tasks, simple analysis, communications outlines, meeting summaries, synthesized instructions: AI can handle work like this so that your staff has time for the more critical work that requires a human touch. Save human time for making human connections and decisions.
- What’s a low-hanging-fruit way of testing AI? Choose a free tool that is well tested and has strong security policies and documentation. A jack-of-all-trades copilot like ChatGPT, Claude, Google Gemini, or Microsoft Copilot lets your team try out AI’s capabilities. These well-known tools are an excellent way to discover whether your team members will actually use AI, without necessarily involving your organization’s data.
- How will we communicate this to our constituents? Start thinking about what external communications around your organization’s use of AI might look like. Will you eventually want to share constituent data, at any level, with these tools? If so, update your policies accordingly and let constituents know, so that they can opt out if they wish.
What are the risks to consider?
As with any significant technology decision, risk analysis is key. Start with the basics.
On the security front, first make sure that your policies are in order:
- Are you taking ample time to review the privacy and opt-out policies of the tools on the table? You should be comfortable not only with how your organization’s data will be used when you prompt the tool, but also with the vendor’s policies on retaining and reusing client data more broadly.
- Does your team know which tools have been vetted and approved, and, conversely, that tools which have not gone through that process should not be in use?
- Have you written a policy covering general-purpose copilots and what staff may use AI to do? If particular types of data should never be entered into AI tools, for instance, put that rule in writing and make it available to everyone who uses the tool in question.
Then, check your data. If your underlying data is outdated or inconsistent, AI may only amplify those issues, leading to skewed results that aren’t worth acting on.
If you haven’t reviewed your data in a while, we have some tips on how to declutter.
Everyone who uses AI should be aware of the ethical implications of this new technology. For instance, image-generation models may be trained on copyrighted work, so the images you create may profit from artists and photographers who have not been adequately compensated.
In addition, generative AI consumes significant electricity to run data centers and water to cool them. That can mean new power lines running through neighborhoods and rural areas, and water drawn in drought-stricken regions where it is already at a premium. These are issues to consider before implementation begins.
Finally, remember that AI supplements human intelligence, not the other way around. No decision should be made, and no result presented, based on the output of an AI tool alone. Review by an actual person is essential to applying and interpreting what AI produces. (Autonomous agents are an exception to this rule, but even they must be thoroughly tested and appropriately constrained based on that testing.)
Artificial Intelligence can be a boon to nonprofit organizations. By helping staff work smarter and freeing more of their time to connect with constituents, these tools can make a real difference to your bottom line and to how meaningfully your team serves your mission. Implementing AI thoughtfully is the first step toward realizing those benefits.
Curious how your organization could start using AI? Let’s talk about your goals, your data, and what’s possible.