Unlocking the Future: 8 Easy Steps for Responsible AI Adoption

The Stanford Center on Philanthropy and Civil Society recently shared an outline of eight straightforward steps for thoughtful and responsible AI adoption, creating an invaluable source of guidance for nonprofit leaders and board members who see the incredible potential of AI but may feel cautious about diving in. This blog explores how the eight prescribed steps, designed for mission-driven organisations, can reveal the way forward with this new technology.

1. Be knowledgeable and informed

Start by building understanding before launching into AI adoption. Take the time to learn what AI can and cannot do. Find out how your staff intends to use tools like ChatGPT or chatbots. Anticipate potential outcomes and risks before deploying on a large scale.

2. Address anxieties and fears

A common fear is that AI will eliminate jobs, and this is important to explore with team members. If ChatGPT can write a 600-word blog, why retain a copywriter? Leaders can calm employee anxieties with straightforward and honest discussions. According to the World Economic Forum, AI is likely to change jobs rather than eliminate them. Communicate this and be open about the place of AI in your organisation.

3. Stay human-centred

Assure your team members that humans will always oversee technology and make the final call on critical decisions. After all, the heart of mission-driven organisations is the team. Introduce AI adoption as a step towards more efficiency, not a way to eliminate jobs.

4. Use data safely

This may seem obvious, but privacy and permission are paramount for nonprofits. Sensitive information is a critical consideration in AI adoption for mission-driven organisations – especially those that serve vulnerable communities. Train team members to handle data responsibly.

5. Mitigate risks and biases

Design and implement a methodology to ensure that AI tools are safe and unbiased. Anticipate worst-case scenarios before they occur and create the procedures necessary to avoid harm to the organisation.

6. Identify the proper use cases

Nonprofit leaders know how time-consuming certain tasks can be, particularly during fundraising seasons. Think about the initiatives that are not getting done because of necessary (but repetitive) jobs. AI automation can help by freeing up time for more essential activities.

Book a demo to see AI in action with our unique GRC software solution that harnesses the power of artificial intelligence to level up efficiency.

7. Pilot the use of AI

Test and retest – multiple times. Before committing to full-scale AI adoption, run a small, short-term pilot and let your team members evaluate the results. Were the outputs accurate? What impact did the AI application have on their jobs and time?

8. Job redesign and workplace learning

As AI becomes the norm, be prepared to refresh job descriptions and provide skills training. Produce and circulate an AI handbook with best practices and guidelines unique to your nonprofit.

“Try [AI] and try it on something personal. Treat it as your junior assistant. Give it the information it needs. If you have something complicated, ask it to summarise.” – Ari Ioannides, Board Member, Park City Institute.

AI adoption in charities

Charities and nonprofits are seeing the upsides of AI adoption in their organisations. Yet various reports and surveys suggest that while interest is growing rapidly, the knowledge gap remains wide. Constant Contact’s Small Business Report reveals that 78% of nonprofit organisations are interested in using AI or automation technology.

However, Nathan Chappell of DonorSearch states that fewer than 30% of nonprofits have started using or exploring AI. The Charity Digital Skills Report 2023 found that of the 100 respondents in a flash poll about AI, a majority (78%) think AI is relevant to their charity, but 73% feel unprepared to respond to the opportunities and challenges it brings.

Taking a broader look at AI governance policies across industries and in both public and private sectors, Babl AI Inc., using a three-pronged methodology (literature review, surveys and interviews), found that 64% of organisations do not have an AI governance programme in place.

“Judgment is key. The governance team needs to have a process and a policy for using AI.” – Dominique Shelton Leipzig, Partner, Mayer Brown.

The risks of not putting AI adoption policies in place

It is easy to be enticed by the advantages of AI. Who wouldn’t want to use a lightning-speed application to research giving trends among major donors? Or launch a fundraising newsletter in hours rather than days? While the benefits are many, so are the risks, especially without organisational policies and procedures to govern AI use. Here are some of the key dangers of not putting governance in place:

Ethical concerns over biased algorithms. AI algorithms may be biased, and when they are, they can reinforce prejudice and inequality.

Privacy issues. The protection of personal and institutional data is fundamental, given the proliferation of deepfakes, disinformation, and hate speech.

Loss of donor trust. When donors lose trust in organisations for any reason, including the misuse of AI, regaining that trust and financial support can be impossible.

Security vulnerabilities. Cybersecurity incidents can derail nonprofits, wreaking havoc on operations, call centres and data management.

Misalignment with mission. Mission-driven organisations must do everything possible to safeguard their reason for being. Unchecked AI output that diverges from the organisation’s views or goals can undermine this.

Legal and regulatory compliance issues. Charities and not-for-profits are bound by legal and regulatory requirements. As AI continues to permeate every aspect of society, more regulations will emerge with which organisations must comply.

Lack of oversight and accountability. Without accountability and oversight, anything from data security breaches to flawed decisions based on unverified AI output can ruin a nonprofit.

AI adoption and risk indicators for charities

The Charity Excellence Framework suggests considering these six indicators to help determine if your charity is ready for AI or at risk of AI disruption:

Stagnation — your charity largely operates exactly as it did 5 years ago

Ignoring AI — your charity does not believe AI will affect the way you work

Tech avoidance — your charity is slow to adopt new technologies and ways of working

Uniqueness — your charity delivers services or information easily obtainable elsewhere

Paying members — people who pay for your services may find AI helps them save on those costs

Face-to-face interaction — AI use, or a blended AI/human experience, may provide more effective service

Using technology to help oversee AI adoption

Governance technology tailored to the distinct needs of mission-driven organisations, including charities and those focused on social and environmental causes, can support a systematic and responsible approach to AI governance. As we have noted, while 78% of nonprofit organisations are interested in using AI, fewer than 30% have started to explore or use it.

Are you interested in how the Diligent platform can take your organisation to the next level of compliance?