
AI Ethics for Nonprofits: Aligning Mission with Technology
Mar 18, 2026

In today's technological landscape, artificial intelligence (AI) is positioned as a catalyst capable of revolutionizing nonprofit operations. However, when we talk about AI in the nonprofit sector, it's crucial to approach it with an unwavering focus on the organization's mission rather than on the technology itself.
Introduction
Welcome to the Mission Aligned Intelligence platform: a space where technology for nonprofits begins with mission, rather than cutting-edge capabilities. In this piece, we’ll explore the ethical dimensions of integrating AI within nonprofit organizations and guide you on how to deliberate wisely on AI's role in your work.
Different Stances on AI Adoption
AI can dramatically multiply your capacity, but the pressing question remains: capacity for what purpose? As discussed by Alissa Condra and Mike Mitchell of Nonprophet AI, nonprofit leaders hold varying stances on AI, from early adopters to the justifiably skeptical. There's also an undercurrent of ethical concern about how adopting AI aligns with the values and mission of the organization.
Addressing Mission Creep
One significant risk, known as 'mission creep,' is straying from the organization's core mission under external pressure, whether from influential donors or tempting government projects. Every AI decision should withstand the same ethical scrutiny as these ongoing challenges, ensuring it serves the broader mission, not just the allure of enhanced efficiency.
Adaptive Ethical Frameworks
Nonprofits, like any organization, regularly face ethical crossroads. Whether it's balancing donor influence or deciding on project expansions, these decisions should be informed by a framework that prioritizes the mission. AI should be embedded within this same process, serving as an assistant to the mission, not redefining it. The criterion for adopting AI should be not what AI can do, but what it should do to align with the core goals.
Aligning Strategy and Practice
It's crucial that board members, executive directors, and staff maintain alignment regarding what defines the organization’s mission. This unity forms the backdrop upon which AI’s role should be evaluated. Align AI efforts with strategic plans to ensure resources and innovations are channelled strategically towards mission-centric tasks.
Mission Primacy and Capacity Management
Start by asking whether AI advances your mission or is just an operational improvement. Operations should always serve the mission. Freeing up capacity via AI should present an opportunity to focus more on mission-critical tasks. This requires intentional planning, as unallocated capacity can lead to inefficiency. Evaluate AI processes by aligning them with strategic goals and redirecting any gained capacity towards high-value, mission-oriented efforts.
Cultural Integration of AI
Changing the culture within an organization to support thoughtful AI adoption is as important as the technology itself. Engage staff at all levels in these discussions, fostering a mission-first mindset. This cultural shift can ensure that AI implementation builds not only on technological advancement but on the very intelligence and creativity AI aims to bolster.
Conclusion and Call to Action
Lastly, for nonprofit entities exploring AI tools—reflect on the two primary questions: Is this tool working in favor of our mission, and how do we plan to utilize the additional capacity it provides?
As you navigate these challenges, remember the ultimate goal is mission alignment, not simply technological adoption. By doing so, AI becomes a tool of empowerment rather than a point of contention, allowing nonprofits to say no to misaligned innovations and yes to those that genuinely advance their cause.
In the forthcoming conversations, Alissa and Mike will further explore data and privacy issues related to AI—essential listening for organizations aiming to protect the intelligence of their programs without compromising user privacy.
Get Involved
Join Alissa and Mike from Nonprophet AI weekly for insights and experiences in navigating the complexities of AI adoption in nonprofits. Engage with fellow community members and share your experiences as we grow intelligence together. Your insights and stories enrich the conversation as we collaboratively reimagine nonprofit technology practices.
For further engagement, connect with us at nonprophet.ai. Spread the word, subscribe to our discussions, and help us foster intelligence within the nonprofit community.
Thank you for being part of this essential journey towards mission-aligned technological integration.
——————————
Full Transcript
AI Ethics for Nonprofits: Keeping Mission at the Core - Mission Aligned Intelligence Episode #3
[00:00:04] Hello, this is Mission Aligned Intelligence, a podcast about AI for nonprofits that starts with mission, not technology, which is the topic we're gonna dive into a bit more today. So today is the first of likely many episodes about the ethics of using AI in nonprofits and the tensions that we've seen on how to evaluate why and where to use it.
[00:00:28] Because we have all heard that AI can dramatically increase your capacity, but the question we wanna dive into today is: the capacity for what? I'm Alissa Condra, here with Mike Mitchell, and we are at Nonprophet AI, bringing you practical ways to use and think about AI. We're not here to tell you AI is amazing, or AI is scary, but rather to help you think through those different tensions that come up.
[00:00:56] So Mike and I have been talking to nonprofit leaders for over a year now, and even people outside of nonprofits, and I think they generally fall into a few categories on AI: early adopters, curious, skeptical, or sometimes just terrified. But even those that aren't terrified, including us, have some fear and worry about the ethics of AI as a whole, and certainly within their organizations.
[00:01:27] So what we wanna discuss today is what questions nonprofit staff should be discussing when deciding what and how much to have AI help with or to bring AI into the organization. The good news is that AI is not the first place where nonprofits have to wrestle with these tough questions and ethical issues.
[00:01:49] You already do it all the time, and just like everything, it's something that needs to be thought through with care. So Mike, you've been thinking and writing and speaking a lot about this lately, so why don't you kick us off? How should nonprofits be approaching this question of the ethical choices of AI when deciding if and how to adopt it?
[00:02:13] Yeah, thanks Alissa. I think that there are a number of things to think about when you think about AI, and the first thing is that it should be thought about in the paradigm of other ethics risks in organizations. And so it might be good to just take a step back: if you're running a nonprofit, or if you work for a nonprofit, there are constant ethics risks all the time.
[00:02:34] So the first one is mission creep. Organizations face mission creep regularly. The way it's supposed to work is that there are clear lines around the mission, but the way it actually works is quite subjective. So you might have a strong ED and a weak board, where the ED drives what is quote unquote mission, or vice versa, a strong board and a weak ED, where the board is driving mission.
[00:02:58] And so whichever is the stronger decides what falls within mission and what doesn't. But that's not the way it's supposed to work. And there are other ethical challenges related to mission. Another one that we've all heard of is donors. Let's say you're a $3 million organization.
[00:03:16] You have a donor that comes along and says, I'm gonna give you a half million dollars, but I want you to do X, Y, or Z. Does the organization take the money? Do they not take the money? That's a real ethical question and a real mission risk. And I think the final one, and this may even be a version of the second, is competitive scope expansion.
[00:03:37] So it may not be a donor that comes along that wants you to do something, but it might be an opportunity where, let's say, a government offers an RFP or a foundation offers an RFP, and the executive director and the board are thinking, gosh, we ought to go for this because we could really broaden our work, get more money, et cetera.
[00:03:57] So every AI adoption decision should begin and end with mission. But it's important to see AI as not something that's new, to be considered as part of the ethical landscape. And that's why, people should see AI as a part of the conversation. That's been a part of the conversation for nonprofits since they were born not as something new.
[00:04:20] Yeah, that makes a lot of sense. My background is more on the corporate side and there is some parallels there, like having to, make decisions based on shareholders, for example. But I'm wondering in nonprofits that you've worked in where you got to see this really closely, if you, if there's certain things that make. I organizations may be better at wrestling with these types of issues or questions is there some that you've seen that do it really well? Or do it really poorly? Or is it like just something that everyone has to deal with every day, all the time, no matter how well set up they are.
[00:05:04] Yeah, I think there are three things. I think one is there needs to be board and executive director alignment over what falls in and outside of mission. The second is that, the staff needs to be bought into what that mission is. And then the things that you're doing should be constantly tested against a strategic plan.
[00:05:24] Most organizations have a strategic plan, and even if they don't, there's some version of a strategic plan every year when an organization passes a budget. So these are some of the things that should be thought about. Yeah, that makes sense. Bringing this back to AI,
[00:05:40] I think what we're saying is the humans in the organization should be thinking about AI through the same lens. So it's the humans driving the AI and deciding, does this AI tool, and what it can achieve, fit into that strategic plan, for example. So it's not AI driving the program, but the organization making that decision strategically first.
[00:06:05] Yeah, it's not what can this AI tool do, but what does the mission require, and does the tool serve it? So it's not, can AI make us faster? It's, should we be doing this at all? And if so, does the AI help us do it better for the people that we serve or the cause that we serve? And this sounds obvious, but it's easy to get swept up in the promise of technology.
[00:06:28] And AI is just a perfect example of it, a technology that promises you three times more output if you just adopt it. But no one's asking, output of what? Yeah, exactly. And I think at some level, we're guilty of this as well. We start the conversation with our clients by saying we can help increase your capacity and reduce staff burnout by getting some of the repetitive tasks off your plate.
[00:06:56] But one thing we've been really trying to be very conscious of is taking that time to sit with the clients, get them on board, sit with their staff, and really inventory where their time is going and what those mission-critical tasks are that they want to be spending the most time on. So what do they feel is the best use of their time, from a human standpoint, from a relationship standpoint?
[00:07:21] And what are the support tasks that AI can really help with? Because I think what you said is right: doing the wrong things faster doesn't serve anyone. Yeah, it's mission, not efficiency, that is the measure. And it's not a new question for nonprofits. They face this all the time. I'll just give you two examples.
[00:07:39] Habitat for Humanity: there's this constant question of, do we hire vendors to build a house faster, which builds more houses? So theoretically it provides more affordable housing, but at the same time, there are fewer volunteers working on the houses who get exposed to the issue.
[00:07:58] So there's no clear black and white line, and it's really trying to figure out where that is. Another example would be using technology to generate more funding applications. And that raises the question: what is the purpose of doing more applications?
[00:08:16] Are you just pushing more stuff out the door? Are you gonna win a bunch of awards that were generically written, that don't have real clarity? And is that gonna box you in as an organization to do things that are not necessarily, again, going back to the mission? So that's really important. Yeah.
[00:08:32] Yeah. So you're talking about something that we've talked about quite a bit as well: the framework that is a part of a lot of what we do, and that I think is really important to underscore here, which includes the idea of the mission primacy test. This is part of the larger mission aligned intelligence framework, which I do wanna spend some time in other episodes to go through.
[00:08:56] But starting with this idea of mission primacy, can you introduce that framework and some of the questions that we try to get organizations to think about before adopting AI? Yeah, so the first question is, does this advance our mission, or does it just make operations easier?
[00:09:14] So that would mean, is this really just something tactical, just an operations thing, or does it really advance mission? These aren't always the same thing. Operations serve mission, but mission doesn't exist to serve operations. Nonprofits don't exist without a mission.
[00:09:32] That is why they exist. And if you can't draw a clear line from the AI use case to mission impact, you should pause. The second question is, what will we do with the capacity that we free up, and does that serve the mission? So an example would be: if you have a staff of eight and you use AI and that frees up four hours a week for each of them.
[00:09:56] That's essentially having another full-time equivalent for a whole month. And this is where most organizations don't plan ahead. AI saves time, but where does the time go? And if you don't decide that intentionally before applying it, you're not really making a change.
[00:10:14] It gets absorbed by more of the same. And so the goal is to redirect capacity to the high-judgment, relational work that actually moves mission. And that's why I think we're doing what we're doing, behind this whole philosophy of mission aligned intelligence. Yeah, exactly. And I think this is something that AI users as a whole are realizing, even on a personal level: if you are using AI to make certain tasks faster, what happens naturally, if you're not intentional about it, is you just do more of those tasks,
[00:10:48] without stepping back to say, is this actually aligning with my goals? It's the same thing that we're suggesting to organizations: is this advancing the mission? Yeah, and the way to think about it, I think, is that as a first step, even before you're asking about what to do with that extra capacity, you as an executive director or you as a senior staff member should be
[00:11:12] taking staff members out to lunch to talk about that and get them to think about it, so that you're not just changing the actions, but you're changing the culture. And once the culture changes, then people are gonna use that time in a way that's more effective. And it'll all go back to being mission aligned, hence the culture being aligned.
[00:11:31] And then you're using and applying the intelligence of those staff more creatively. Yeah, exactly. Yeah, that makes sense. And one thing I wanna add here, maybe we can talk about this a bit too: a big fear that people share with us is that AI is here to reduce staff. They even say it to our face: you're here to get rid of people.
[00:11:52] And that is never our intention. And really, I don't think there are very many organizations that are even close to that level of efficiency. I think your example, of everybody getting four hours back in their week, that's something that they can then use to advance the mission.
[00:12:09] That's not enough efficiency that it's gonna mean mass layoffs in some of these small organizations. So I hope when people hear you talk about this framework, they're realizing that by evaluating critically what AI can do, and where this new capacity could be better spent,
[00:12:29] they can truly free up their existing staff to do the work that directly advances their mission: tasks like building relationships and serving their people in a way that AI will never be able to do. Absolutely. Absolutely. Okay, we've been trying to leave people with something to do each week, and I think those two questions you asked are the perfect sort of homework.
[00:12:54] Pick an AI tool that you're currently using or considering using, and run through those two questions we discussed today. Is this advancing the mission, or is it supporting operations? And then, where would that freed capacity actually go? So if it can do all of the things you're planning to do with it, what will that staff member, or your team, do with that freed-up time?
[00:13:24] And the answer could be, go home before dinner; that's fine too. It doesn't have to be more work that you wanna put on someone's plate. We've heard from a lot of nonprofit staff that they're working unpaid overtime because they are so passionate about what they're doing.
[00:13:40] Have those discussions. Give your organization that culture of openness, of being able to talk about what you wanna bring in and how it can really move the organization in the right direction. Talk about those two questions. Write down the answers.
[00:13:58] Actually write 'em down so you can think about it on your own, and then bring it to your team. I think taking 15 or 30 minutes to ask these questions of yourself and discuss them with others could save you from bringing in tools that don't get adopted, or don't serve the mission, or just get you to do more busy work, or more work that is moving you in the wrong direction.
[00:14:21] Yeah. And when you put mission first, AI becomes a question of alignment like we've been talking about, not just adoption, which is actually freeing. It gives you permission to say no to tools that just don't fit. And it helps you recognize the tools that are genuinely gonna serve the work, which is really key.
[00:14:38] Yeah. So next time we want to continue talking about some of these ethics and tensions in adopting AI. I think a logical next step would be to go into the data and privacy tensions around AI, and talk to listeners about what we've heard and what they've probably been worried about or experiencing in their own organizations.
[00:15:01] So it's really about how to build intelligence about your programs without exposing the people in them, and why the most valuable thing AI can do is connect the intelligence in your organization that you already have, that your staff members already have. So if you've ever felt like your staff know things that are not captured in systems, this will be the perfect episode for you.
[00:15:30] So thanks everyone for listening to Mission Aligned Intelligence. We are Alissa and Mike from Nonprophet AI. Find us at nonprophet.ai if you're navigating these questions in your own org, or we'd just love to hear what you're learning. And if you found this interesting, please send it to a friend, share it with a coworker. We'd love your help in getting the word out, because we are about building intelligence in the nonprofit community.
[00:15:56] Thanks for listening. Thanks, Mike.
Read more articles
The Prophet of Many for the Mission of One. Yours.
Copyright © 2026 Nonprophet Advisors







