In the months since we brought together members of nonprofit and philanthropic organizations to better understand the potential of artificial intelligence (AI), particularly generative AI, to drive economic opportunities for lower-income communities, we have continued to track developments in the field. Our conversations with these organizations have highlighted both the ways some nonprofits are using AI to improve direct service delivery to their clients and the potential pitfalls and risks that arise when deploying these powerful tools with vulnerable populations. Key themes that emerged in these conversations include the following:
Nonprofit and philanthropic leaders expressed both excitement and caution around AI.
Philanthropic leaders were eager to support investments in nonprofits to expand their own capacity and unlock new ways for nonprofits to serve their communities. But they also recognized the risks and unintended consequences that could arise if AI was not implemented thoughtfully with robust guardrails, including governance structures to protect sensitive client data.
Leaders agreed on the importance of developing policies to guide responsible AI use.
Kevin Barenblat of Fast Forward has written about the importance of establishing policies for the responsible use of AI, noting that without a good AI policy and guidelines, nonprofits could be at risk of “misusing AI, violating donor and beneficiary trust, or simply failing to harness the technology effectively.”1 During our discussion, Nick Arevalo of Tipping Point said that organizations he works with in San Francisco are “building out a task force of folks throughout the organization—from the frontline staff members all the way to the leadership team and their tech folks—to think through what [their] governance structure looks like as their [AI use evolves to see] if it’s still staying within the bounds of ethics” they have established.
Isla Lund of Larkin Street Youth Services, a nonprofit that serves young people experiencing homelessness in the Bay Area, also shared how their organization intentionally involved case managers in their design coalition to encourage collective learning and buy-in: “Our case managers are deeply involved in the design team. We started formulating the design team, and our first impulse was, ‘let’s get the folks on board who are the most excited about AI,’ because they need to be the ambassadors of this tool. We also need to include those folks [who] are very resistant to this technology… because we are going to be asking all of our case managers to buy into this.”
Including clients in the design of AI tools for nonprofit use has also been an important learning opportunity.
For Larkin Street, understanding client concerns has also been critical to their process of implementing new technologies across programs. Staff noted that they were intentional about including the youth they serve in their design process. In those focus groups, clients did not so much express concern over the involvement of AI in the organization’s service model, but rather they surfaced a broader lack of clarity on how their data were being used. This feedback not only helped inform Larkin Street as they developed their own case management AI tool, but it is also helping to improve other internal processes, like communicating data usage back to clients.
As Kevin Barenblat wrote, “AI is a powerful tool, but it’s only as effective as the policies that govern its use.”2 Nonprofits and philanthropic organizations across the country are continuing to explore and test ways in which this rapidly evolving technology can benefit their sectors and the populations with which they work. Continuing to surface lessons learned and promising practices can help inform efforts to ensure the ethical and effective deployment of these tools as organizations seek to improve outcomes and economic opportunities in the lower-income communities they serve.
This article is part of a Community Investments series exploring the ways in which the growing prevalence of artificial intelligence may be impacting economic conditions, especially in low- and moderate-income communities and among community development stakeholders. Gaining greater insight into emerging economic trends through community engagement and analysis—including better understanding the economic experiences of lower-income workers and consumers—contributes to the Federal Reserve Bank of San Francisco's work to support monetary policy, strengthen financial institutions, and enhance payment systems.
End Notes
1. Kevin Barenblat, "Why Nonprofits Need an AI Policy," NonProfitPRO, April 16, 2025.
2. Ibid.

