
Navigating Data Privacy in Nonprofits: Balancing Mission with AI

Welcome back to Mission Aligned Intelligence, where we delve into the intricate world of AI for nonprofits, always ensuring mission takes precedence over technology. Today, we venture into a critical topic that's as complex as it is essential: data privacy within the context of AI adoption by nonprofits. While last episode focused on mission-based AI adoption, today’s discussion navigates the nuanced tension between leveraging AI and safeguarding sensitive data.
Unveiling the Data Dilemma
The evolving landscape presents nonprofits with a dual challenge. On one hand, funders demand data-driven narratives to showcase impact—narratives that AI can certainly help construct. On the other, the very data that makes these narratives compelling often requires strong protection due to its sensitive nature. We explore this dichotomy: AI aids in synthesizing program data into impactful narratives, but it also processes sensitive beneficiary outcomes, demographic information, and case histories. The crux isn’t just about using AI to demonstrate impact; it's about leveraging this intelligence responsibly without compromising participant privacy.
Mission-Aligned Intelligence: A Three-Box Framework
Our discussion, led by Mike, outlines a framework he calls the "Three-Box Approach". It sorts the same underlying data into three boxes:
Program Delivery and Impact: Insights about direct beneficiaries.
Organizational Learning: Lessons learned stored as organizational knowledge.
Donor Communication: Compelling narratives constructed from the same data set.
This approach, however, demands dissecting how data is used across different organizational levels while ensuring equitable benefit sharing, especially guarding against privileging donor interests over participant privacy.
Ethical Reflection: Questions Nonprofits Must Ask
Achieving equilibrium between mission fulfillment and data ethics involves asking:
What’s the minimum data necessary to collect while respecting participant privacy?
Can we proudly disclose to those served how their data is being utilized?
How do we balance fulfilling donor demands with maintaining ethical standards?
Deciding to collect data because it’s necessary—not merely possible—is a philosophical stance that demands reflection and discussion among all stakeholders, especially when AI is involved.
Safeguarding Data: Practical Considerations
From anonymizing datasets to scrutinizing vendor privacy policies, safeguarding data demands diligence. Nonprofits should strive for an enterprise-level understanding, ensuring leadership, legal counsel, and board oversight collaboratively steward this responsibility. Moreover, using AI within ethical confines involves:
Adopting team-oriented AI plans so that organizational data does not unknowingly become part of larger model-training pools.
Establishing clear guidelines and training for staff regarding responsible and informed AI usage.
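The anonymization step described above can be made concrete. Below is a minimal sketch of pseudonymizing participant records before they reach any AI tool, using a keyed hash so the same person always maps to the same opaque ID. All field names (`name`, `email`, `outcome`, `participant_id`) are hypothetical examples, not a real nonprofit schema.

```python
# Minimal pseudonymization sketch (hypothetical field names).
# A keyed HMAC turns names into stable IDs that only the keyholder
# can connect back to real people.
import hashlib
import hmac

# Secret held only by the nonprofit; load from a secure store in practice.
SECRET_KEY = b"replace-with-a-long-random-secret"

def pseudonymize(name: str) -> str:
    """Return a stable, non-reversible ID for a participant name."""
    digest = hmac.new(SECRET_KEY, name.strip().lower().encode(), hashlib.sha256)
    return digest.hexdigest()[:12]

def strip_identifiers(record: dict) -> dict:
    """Drop direct identifiers, keep program fields, add a pseudonym."""
    cleaned = {k: v for k, v in record.items()
               if k not in ("name", "email", "phone")}
    cleaned["participant_id"] = pseudonymize(record["name"])
    return cleaned

records = [
    {"name": "Jane Doe", "email": "jane@example.org", "outcome": "placed"},
    {"name": "Bob Smith", "email": "bob@example.org", "outcome": "not placed"},
]
safe = [strip_identifiers(r) for r in records]
```

Because the hash is keyed and stable, outcomes can still be tracked across reports without exposing names. Note that this is pseudonymization, not full anonymization: small demographic cells can still re-identify people, which is why the governance review discussed in the episode still matters.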
Conclusion: Moving Forward with Awareness
Nonprofits are encouraged to evaluate current data practices, integrating AI with intention and broad organizational involvement. By fostering open discussions about data privacy, organizations respect the trust of their beneficiaries while effectively leveraging technological advancement. In our upcoming conversation, we’ll expand this data ethics dialogue to internal dynamics.
We'll address how AI integration impacts staff and volunteers, ensuring technological advancements respect their expertise and contributions. Thank you for exploring this vital conversation with us. For those wrestling with similar challenges, your insights and experiences are invaluable.
We invite you to join our ongoing dialogue at nonprophet.ai and share your journey toward conscientious AI integration.
Full Transcript:
[00:00:05] Welcome back to Mission Aligned Intelligence, the podcast about AI for nonprofits that starts with mission, not technology. I'm Alissa, here with Mike, and we're Nonprofit AI. Thanks for joining. Last episode, Mike, we talked about using mission as a filter for AI adoption, and we left everyone with a big question: Does this advance our mission, or does it just make operations easier? That's in regards to choosing when and what to adopt in terms of AI tools.
[00:00:40] So today I want to stay in this ethics conversation, but go somewhere a little bit harder in our experience with some of our clients, and that is data and privacy. And I don't know if I should say this upfront, but bit of a spoiler: we don't have a clean answer for what you should do. So this podcast, as usual, is here to help start those discussions, because there is a real tension right now in how nonprofits are being asked to use AI.
[00:01:10] So that's what we wanna discuss today: the tension within nonprofits where funders want data-driven proof of impact, which AI can absolutely help with, but the data that makes those narratives compelling is exactly the data that demands the strongest protection. So for example, AI can help synthesize program data into narratives and dashboards and reports, but that data can also include beneficiary outcomes, case histories, demographic information, sometimes really sensitive stuff.
[00:01:43] So the question isn't whether to use AI to measure impact, it's how to build intelligence about your programs without exposing the people in them. So Mike, I know this is something that you have been thinking a lot about, even since you started Nonprofit AI over a year ago now. Where do you start when you're sitting with an organization on this? Yeah, I would say, Alissa, that I would look at this in three boxes. I would even frame data as intelligence when we think about mission aligned intelligence, and I would say there are three intelligence boxes. So the first box is the intelligence about the program delivery and impact for the actual clients or people served, the beneficiaries.
[00:02:33] And what do they get out of knowing that intelligence? They really don't, because it's in the second box, which is for the organization. So the organization is learning this intelligence and building a set of lessons, and those lessons in that second box are a piece of data that's owned by the organization.
[00:02:51] And then the third box is actually for donors, and it's creating a message for those donors. And even though it's one set of data, the data is used in three different ways, and it's really important to think about how we collect that data and how that data informs the benefits to the individual, the benefits to the organization, and the benefits to a donor.
[00:03:17] And it also frames whether or not something is being taken away from the first box. So it is something we should always pay attention to. And as you said at the beginning, this is not something where there's a really clean-cut answer. Every organization needs to be thinking about it as they go through the journey, because you're never really gonna arrive at an answer.
[00:03:38] Yeah, that makes sense. And I think the benefits not always aligning is an important point. And as we talked about, the pressure to generate that impact data to help the donor's box, or the funder's box... I don't have as much experience with it, but that can sometimes be a really big driver in some of these decisions on what systems or what processes to adopt.
[00:04:05] So would you say this is something that comes up a lot, in terms of pressure from those first boxes to generate or collect the data for that third one, like you said? Yeah, I would actually say the pressure comes from the donors, from the third box- Yeah
[00:04:27] to know what the story is. But the pressure for making change on mission is in the first box. So how do you balance the first box with the third box to make sure that data is shared equitably, in a way that's just to the client and helps them benefit, and in a way that helps the donor feel like they're making a difference by giving resources, but without exploiting data and exploiting what people might be sharing?
[00:05:00] So the donors are asking for the data. The boards are asking for it, because they know they want to be able to report back on it. But they're the ones that are maybe caught in the middle, or really, I guess, it's more the program staff that's caught in the middle, of what is really appropriate and safe to report back without disenfranchising our participants, in a way.
[00:05:25] Yeah. If I can share a specific example. Now, this goes back some years, which is probably safe for confidentiality because it does go back years. In the mid-2000s, there was a big effort by the federal government called Welfare to Work, and what they were doing was pressuring a lot of people who were receiving benefits to go to work.
[00:05:47] And I worked for an organization that was leading the effort, and my job was to actually connect low-income folks to jobs that were more or less in the suburbs, and there was a lot of pressure to get data. Now, back then there was no such thing as AI, and so you had hundreds, thousands of sheets of paper, and it was, frankly, a big game to prove that you got more placements into jobs than some other nonprofit, even if those placements weren't what was best for the client.
[00:06:21] And that kind of pressure fell on nonprofits. So that's an example where collecting data is not necessarily helpful to the end user if it's used in the wrong way. And that's what we want to avoid through our work in mission-aligned intelligence: how we view intelligence as a connector across different parts of an organization and, in the context of this conversation, the safety and the protection of the privacy of those people who are being served.
[00:06:55] Yeah, that makes sense. And I think we haven't even really spoken specifically about AI yet, but there's still another level that I wanted to get into related to your example. This is not something that AI has necessarily introduced, but there are certain things about AI that make it more complex, which I think we'll get to next.
[00:07:14] But first, thinking back to what we spoke about last week, the mission primacy test: where do you see that sort of tension in the data? How do you see the mission primacy test applying to data? The reality is that mission tends to be really clear to outsiders, but inside an organization, "mission" will mean different things to different people, and so will how it's applied.
[00:07:42] So to frontline people, meeting mission is seeing an outcome for the people being served. But for people that are higher up in the organization, who are concerned about making sure it has enough gas to keep moving along and doing its job, there's a different set of pressures. And that tension that you referred to at the beginning is dangerous and needs to be managed with thoughtful intention and process before you even start doing the work.
[00:08:14] Yeah, definitely. So related to the questions that we asked about assessing an AI system, or really any tool or system that you bring into your organization, when you talk specifically about data, it sounds like the questions still apply in a very similar way.
[00:08:37] So what is the minimum amount of data that we need to collect in order to tell the story, or in order to satisfy the donor's reporting requirements? The minimum data is the part that, as you're saying, doesn't have a clean answer, and you need to discuss it in regards to the mission and where it applies.
[00:09:02] So not all the data that you have about a certain program or individuals, but sitting down and actually deciding what's actually required, and how we can keep people safe as we're reporting it. Yeah, exactly. I would even say, going back to the example that I shared, different pieces of data are relevant to different parts of an organization, different pillars in an organization.
[00:09:24] So in the example that I shared, the CEO of the organization might just care about how many placements of people there are, so they can give a big number to donors. But to the frontline staff, the bigger question is: why did Jane Doe get a job here, but Bob Smith did not get a job there, even if on paper they had the same qualifications?
[00:09:49] And you have to look at many levels of data to begin to see what the message is with that. Does it have something to do with the program of the nonprofit not delivering for the client that didn't get the job, or does it have something to do with the background of the client? And so you can see how deeper levels of data inform outcomes and create different interests within an organization when they think about data.
[00:10:14] Yeah, that's interesting. And I think part of the story about finding that data and keeping it safe, as you're then meeting your reporting requirements, would be: who has access to that data, and through what systems? If you're creating presentations and sending reports, you can filter that data in a way that keeps certain private information safe.
[00:10:44] But if you're giving people access to third-party systems, that could expose data points that might not need to be exposed. Yeah. And as you and I have discussed, the privacy policies of vendors of different types of software vary wildly. And given that there is so much fine print, it is difficult for a layperson to read through all of it and know what they're really saying, and the legalese makes it even more difficult.
[00:11:17] It's almost like you need to run a software vendor's privacy policy through AI and ask what it means in plain English, and then go from there. Because you wanna use software from a company that you can trust, and that's a big emerging issue in the whole field that you and I work in.
[00:11:37] Yeah, and that's a good use case. For somebody who is using AI now, or looking to start, that would be a perfect place to start with AI. So put potential partners, potential vendors into AI, say, "Look at their privacy policy," and then put your own parameters in there: does this align with our mission?
[00:11:58] And here's our mission statement. Does this align with our strategic plan, or anything else that you wanna share with it? I think that's a really good idea. Yeah. One thing you also mentioned when we were preparing for this episode that I thought was really interesting is a question to ask yourselves as an organization: would we be willing to tell the people in our programs exactly how their data is being used?
[00:12:28] So similar to how software vendors can sometimes hide that sort of information. When you're signing up for a new website, you check the "accept terms and conditions" box without ever reading it. If somebody put your data policy, or exactly what you're doing with their data, into AI and said, "What's happening with my data?"
[00:12:46] Would you feel comfortable with them knowing that detail of what you're doing with their data? I think that's a great approach to thinking about how organizations approach this. And I think what you've basically kicked off is a new idea that nonprofits should follow, because none that I know of actually does that.
[00:13:07] Or if they do, they mention it at a very high level, and typically the people being served don't even have the context to know what all of that means. And so that's really a good question. Yeah. If you're sitting with somebody face-to-face and explaining what happens behind the scenes, would that be something you'd feel confident and proud to tell somebody, regardless of where and how you're using it?
[00:13:33] And just another anecdote. With the pressure on nonprofits to raise money, they wanna tell good stories. And I don't know if it was Napoleon or who said this, but "One death is a tragedy. A million deaths is a statistic." If you're pulling together lots of data from many different people, it's a statistic, but foundations also want the individual story that will then show them that they have made a big difference, and yet the client may not want their personal story shared.
[00:14:06] Yeah. So being able to disclose that, to have a clear opt-in, being able to say this could be used in this way, or asking their permission before doing it. Exactly. I think this shows up for companies as well. They might have missions and value statements on their website, and maybe certain assumptions that people are making by doing business with them.
[00:14:30] But you hear all the time in the news about things like data breaches or personal information being shared with third parties. And yeah, it can be disastrous, but there are so many that you probably don't even hear about, and who knows what they're doing with that data.
[00:14:44] So I think for nonprofits specifically, and in the context of what we're doing, making that whole conversation and system development about data is so important, especially now. Yeah. You and I have talked about organizations having an AI policy. It should be sitting right next to their confidentiality or privacy policy, and the two should be aligned with one another.
[00:15:10] Yeah. So we've been talking about data outside of AI, because you're right, this applies to all information you're collecting and reporting on, whether it's on a piece of paper or in an AI system. But maybe, if you want to kick us off: how should organizations think about putting data and their information into systems that AI can touch?
[00:15:33] How should that affect anything, based on what we've already started talking about today? Yeah. I think the main thing is to make sure the data is anonymized before it goes into any system. So they should have tools that take a list of names and turn it into a list of numbers, where only the nonprofit knows what those numbers represent.
[00:15:54] Maybe a number that reflects an individual that they have served. The second thing is that they should have an administrative person, a financial person, or someone like a COO review the privacy policies of any tools that they are using. And an even deeper step would be making sure that a governance committee of the board, or whichever board committee deals with privacy and policies, reviews and gives guidance on this, including a lawyer, to make sure that things are protected and ethics are taken into account.
[00:16:34] The third thing that I think needs to happen is that organizations need to ask the question that you asked a few moments ago: "What would a person think if they knew you were sharing this?" And that, in the end, might be the ultimate standard, because without it, if you're pretending that they won't care, you're not being authentic with yourself, and you're not aligned with your own mission that aims to protect and serve some group of people that are vulnerable, for example.
[00:17:06] So those are three things that I think should be thought about. Yeah. And I would add one more, maybe more basic, practical one, which we have told everybody in the trainings we've done so far: if you're just getting started and you're just in the organization using a Claude or a ChatGPT, or even Gemini, Copilot, things like that, and you're not on a paid plan, your data may be used to train new models, which means information from your organization could be going into a big pool of knowledge that could show up in future versions of these tools.
[00:17:47] And even if you're on a paid plan, you often have to go to settings and explicitly find the option to turn off "Use my conversations to train new models," and if you haven't done that, do it right away. So that's part of, I think, setting up an AI policy. You would make sure you're training your staff not to be using their personal AI accounts for work, using a team plan ideally, like a Claude Teams or a Copilot Teams, and ensuring that all those sorts of settings are turned off.
[00:18:23] Good point, and very valid. I don't think most people realize that. Yeah. We have found that in some of the trainings we've done, there are a few things that come up every time, and definitely the feeling of "is this really private?" comes up. And it's hard, because with any software in the cloud, your data
[00:18:44] really is in the hands of some of these big companies. But there are steps that you can take to protect it, and getting on a team plan is part of it. And working with someone to do trainings or set up systems can help as well, so you can have a more personalized assessment of what your staff are doing and where that data could be going.
[00:19:06] And I've been reading recently that enterprise plans from some of these AI companies have a very hard lock on not using your data in a big way. And I think that's important for nonprofits to note when weighing value, because these big AI companies depend on their enterprise accounts, and they don't wanna jeopardize the brand that they have by using that data in the wrong ways.
[00:19:33] So it's worth nonprofits thinking about full-fledged enterprise plans. Yeah, that's definitely true. So I'll try to summarize our recommendations, but jump in if there's anything else that I've missed. Here's what we want you to try this week: pick one place in your organization where program data is being collected, maybe already being used with AI, or where there's pressure to start.
[00:19:57] Could be impact reports, could be a chatbot, could be a tool that a vendor is pitching you, for example. And walk through the questions that we discussed today with your team. So: what's the minimum data we actually need and need to share? Who has access, and through what systems? If there were a breach, or this data got out somewhere, who would be harmed by that?
[00:20:22] And are we collecting it because we need to or because we can? And I guess the other one that you had mentioned, Mike: would we be comfortable if everyone affected watched us do this, or knew what we had collected about them behind the scenes? So you don't have to come out of that meeting with a final policy, but the point is to have that discussion and start talking about it, so that it's not happening in the background or in the hands of a few people without everyone in the organization being on board.
[00:20:56] Yeah, you summed it up really well. So next time we want to keep going with this ethics thread, but turn it towards your team. We talked about mission, we've talked about data, and the next layer is the people inside your organization: staff, volunteers, the folks doing the work every day.
[00:21:17] What does it mean to introduce AI in a way that respects them, gets their input, and doesn't sideline their expertise and their intelligence? One thing that we've found with the clients we're speaking to is: how do you avoid the version where AI gets handed down from leadership, and the people who actually do the work feel like something is being done to them instead of with them?
[00:21:47] The biggest thing with AI that we've seen is the same as with a lot of these new technologies: the change management and the human part of the adoption is the biggest hurdle in a lot of cases. Especially for these organizations who wanna make sure that they're doing right by their staff and bringing everyone on board.
[00:22:07] So we'll spend some time with that topic next week. And thank you so much for joining us. Thank you, everybody. And don't forget, if you're working through these questions in your own organization, we'd genuinely love to hear what's coming up for you. If this was useful, send it to someone else you know who is wrestling with these same questions.
[00:22:31] And visit us at our website anytime, nonprophet.ai.
The Prophet of Many for the Mission of One. Yours.
Copyright © 2026 Nonprophet Advisors







