Artificial intelligence offers an incredible opportunity for nonprofits to amplify their impact, optimize resources, and scale donor engagement like never before. But it must be handled responsibly. In a sector built on trust and equity, it’s essential to ensure that AI tools are used ethically, not just efficiently.
To navigate this landscape with integrity, organizations need a clear framework. This checklist provides actionable guidance to help you choose the right technology partner and establish policies that put governance, transparency, and accountability at the forefront of your strategy. With it, you can ensure AI empowers your mission without compromising your values.
Choosing a technology partner that prioritizes ethics
The first step to using AI ethically at your organization is to choose a partner that shares your values and creates tools made specifically for nonprofits. Ask the following questions to evaluate potential partners and ensure they are prioritizing integrity over speed.
- Transparency test: Ask vendors to explain how their AI systems work. Can they clearly describe how inputs are used, how outputs are generated, and how decisions are made? Does the vendor publish a public set of ethical AI principles?
- Accountability measures: Ensure the vendor provides tools that are auditable, explainable, and appealable. Can you trace decisions back to their source?
- Bias mitigation: Does the vendor actively address biases in their AI systems? Look for evidence of fairness audits and diverse datasets.
- Privacy and security: Verify that the vendor complies with global data protection standards and respects donor privacy. Do they have clear data usage policies? How does the vendor manage privacy and consent when using your data for training or product development?
- Governance practices: Does the vendor have a governance-first approach, including human-in-the-loop controls and oversight workflows?
Creating your organization’s AI policies
Beyond choosing the right technology partner, it’s important for your organization to have three policies in place to protect your mission, team, and supporters: an AI data policy, an AI input and output policy, and an AI usage policy.
Developing an AI data policy
Your AI data policy governs what data is collected, how it’s used, and how it’s protected. In the social good sector, this isn’t just a compliance issue; it’s a matter of ethics. A clear AI data policy shows supporters that you take their data seriously and safeguards the trust they place in you.
- Data identification: Define what types of data can be used by AI and what’s off-limits (e.g., PII, program notes, case files).
- Security: Implement strict controls to prevent unauthorized access or sharing of sensitive data. Document whether AI systems can access internal databases (e.g., donor CRM, volunteer platforms) and under what conditions.
- AI training protocols: Require anonymization and aggregation for any data used in AI training or optimization (a minimal sketch follows this list).
- Ownership and consent: Clearly define who owns the data and ensure donors and stakeholders are informed about how their data will be used.
- Donor dignity: Does your policy prioritize respect and trust over mere compliance? Would the donor be comfortable knowing how their data is being used?
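To make the anonymization and aggregation requirement concrete, here is a minimal sketch of what a pre-training preparation step might look like. It is illustrative only: the record fields (name, email, gift_amount) are assumptions, and hashing an identifier is pseudonymization rather than true anonymization, so stronger protection may mean dropping identifiers entirely.

```python
import hashlib

# Hypothetical donor records; the field names are illustrative assumptions.
donors = [
    {"name": "Ada Lovelace", "email": "ada@example.org", "gift_amount": 50},
    {"name": "Alan Turing", "email": "alan@example.org", "gift_amount": 120},
]

def anonymize(record: dict) -> dict:
    """Replace direct identifiers with a one-way hash and keep only
    the non-identifying fields needed for training."""
    donor_key = hashlib.sha256(record["email"].encode()).hexdigest()[:12]
    return {"donor_key": donor_key, "gift_amount": record["gift_amount"]}

# Aggregate before any training use so no single donor is exposed.
anonymized = [anonymize(d) for d in donors]
average_gift = sum(r["gift_amount"] for r in anonymized) / len(anonymized)
print(f"{len(anonymized)} anonymized records; average gift ${average_gift:.2f}")
```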
Developing an AI input and output policy
The way your team uses AI tools, especially prompt-based systems, can affect both outcomes and ethics. Inputs and outputs should be governed with clear standards to avoid bias, error, or misuse.
- Acceptable prompts: What types of prompts are allowed or restricted (e.g., no use of real donor names, health data, or sensitive personal stories)? Provide clear examples of appropriate prompts for common use cases (e.g., “Write a thank-you note for a first-time donor”). See the screening sketch after this list.
- Bias testing: Are there safeguards to prevent unintended discrimination? Regularly test inputs for biases that could skew results, and build a feedback loop so staff can report when an output feels off, biased, or inaccurate.
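One way to operationalize an acceptable-prompt rule is a lightweight screening step that flags restricted content before a prompt ever reaches an AI tool. The sketch below is a hedged illustration: the two patterns shown (email addresses, health terms) are placeholder assumptions standing in for whatever your own policy restricts.

```python
import re

# Illustrative restricted patterns; a real policy would maintain its own list.
RESTRICTED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "health reference": re.compile(r"\b(diagnos\w*|medication|treatment)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any restricted patterns found in the prompt."""
    return [name for name, pattern in RESTRICTED_PATTERNS.items() if pattern.search(prompt)]

violations = screen_prompt("Write a thank-you note for a first-time donor")
if violations:
    print("Prompt blocked; restricted content found:", ", ".join(violations))
else:
    print("Prompt allowed")
```

A simple check like this won’t catch everything, which is why the bias-testing feedback loop above matters: staff reports catch what pattern matching misses.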
Developing an AI usage policy
Your AI usage policy sets expectations for how AI fits into day-to-day work: who may use it, for which tasks, and with what oversight. Clear usage rules keep automation accountable to the people it serves.
- Human oversight: Require human review for all automated decisions and AI-generated outputs before external sharing, especially fundraising materials or grant proposals (a minimal review-gate sketch follows this list).
- Ethical guardrails: Embed governance-first design into your AI usage policy. Are there clear acceptable-use policies and fairness reviews in place? Do you require opt-out mechanisms or override options for automated actions?
- Transparency in use: Communicate openly with staff, donors, and stakeholders about how AI is being used. Would you feel comfortable explaining your AI usage to your board or donors?
- Risk management: Continuously monitor AI systems for emerging risks. Are there mechanisms to detect and correct errors before they cause harm? Schedule regular audits to review how your AI tools are performing.
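As a hedged illustration of the human-oversight item above, the sketch below gates AI-generated drafts behind an explicit approval decision and logs who approved what, which also feeds the regular audits mentioned under risk management. The function name and log structure are assumptions for the example, not any particular product’s API.

```python
from datetime import datetime, timezone

audit_log = []  # In practice, write to persistent, access-controlled storage.

def release_draft(draft: str, reviewer: str, approved: bool) -> bool:
    """Release an AI-generated draft for external use only after an explicit
    human decision, and record that decision for later audits."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "approved": approved,
        "excerpt": draft[:60],
    })
    if not approved:
        return False  # Held back for revision; nothing goes out.
    # ...hand off to the email platform, grant portal, etc. (out of scope here)
    return True

sent = release_draft("Dear friend, thank you for your generous gift...", "j.smith", approved=True)
print("Released:", sent, "| audit entries:", len(audit_log))
```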
This checklist is more than a set of guidelines. It’s a strategic framework designed to empower your organization to innovate with confidence. By embedding ethics into every stage of your AI journey, from vendor selection to daily use, you protect your mission and the invaluable trust you have built with your community. Embracing these principles ensures that as you enhance your mission, you also safeguard the values that drive your work forward. Responsible AI is not just smart governance. It’s mission alignment in action.

