Artificial intelligence (AI), which enables machines to simulate human thought processes, is growing by leaps and bounds. Indeed, ChatGPT, an AI-powered content generator, surpassed 180 million users in March, according to OpenAI.
Naturally, not-for-profits are taking notice, and some organizations have incorporated ChatGPT and other forms of AI into their operations. The speed, efficiency and cost-effectiveness of these technologies lie behind many decisions to implement them. However, you should be circumspect before adopting AI: it raises a number of moral, ethical and human considerations.
Plus and Minus
On the plus side, nonprofits can use AI to create or complement grant proposals, write thank-you notes to donors, and churn out other routine communications. These tools can free staffers for work that requires human interaction, such as working one-on-one with clients or meeting with potential donors.
On the other hand, AI might be used to support biased or discriminatory beliefs. It has been used to send confusing messages, disseminate false information and distort intentions — sometimes leading to bad press, lost support and lawsuits. What's more, AI could lead to job losses and reduce human interaction. In a worst-case scenario, badly deployed AI could destroy a nonprofit that is otherwise performing essential services.
Practical Suggestions
Keeping that in mind, here are several practical suggestions for adopting AI in a responsible manner:
Think about and test it first. Assess risks vs. rewards before you make any long-term commitments, especially financial ones. It may take some time to figure out how to best use AI in your organization. Start by identifying bottlenecks or problem areas where AI could be beneficial. Then implement a pilot program and assess the data. Depending on the results, you may decide to back off and retool or fully commit to AI solutions.
Co-bot, don't robot. If you bring AI into your orbit, you don't have to go all-in. You may want to "co-bot," or have staffers and AI collaborate. Employees supervise and instruct an AI application to carry out automated tasks under a carefully devised procedure. While a machine is doing the grunt work, staffers coordinate activities and uphold standards.
Ease employee worries. The natural first reaction of staffers to AI is to worry about job security. In most cases, jobs change somewhat, but don't disappear when AI is introduced. Nevertheless, your managers should address this issue head-on with open discussions about your nonprofit's direction and the impact AI is likely to have on it.
Revise job training. If you implement AI, it'll likely change the way you train employees. This will be critical for both new hires and existing staffers. Job descriptions should reflect workers' roles in working alongside AI within the organization, as well as any new functions.
Retain the upper hand. The idea of AI taking control of an organization's decision making isn't as far-fetched as once thought. As long as employees remain responsible for making assessments and judgment calls, you're more likely to avoid potential problems. If AI gathers information for certain purposes — say, to recommend services to at-risk children — it's important that humans continue to play an active, ethical role.
Minimize the risks. Your nonprofit must be vigilant in monitoring AI tools for risks, such as embedded bias or discrimination. It may be possible to train a bot to comply with your organization's values. Consult with your service provider or an AI technology expert to discuss possible options.
Does It Make Sense?
AI can easily accomplish jobs that are time-consuming, labor-intensive and repetitive. Don't spend staff time and effort on extensive research or the distribution of thank-you notes to donors when those tasks can be "assigned" to an effective AI tool.