By Tricia Gueulette
I recently moved into a new home with my partner, Patrick. The move was exciting, and Patrick was enthusiastic about making it a smart home. I was curious to explore the possibilities, so we dove in and connected as many things as possible to Google Home. It’s been an adjustment! I’m still getting used to talking to the house to turn on lights and play music. Sometimes I feel a little silly, but it’s also fun. The biggest challenge for me has been remembering the exact names for everything. Is it the ‘table lamp’ or the ‘living room lamp’? I’m constantly getting them mixed up, which can lead to some funny (and occasionally frustrating) moments as I learn the new vocabulary of my house.
That being said, AI has come a long way since the advent of Google Home (which is about to become a lot smarter with the integration of Google’s AI assistant, Gemini). I came home the other day, and I could hear Patrick talking to someone in his office. The conversation was deeply philosophical, and I wondered who had stopped by. It turns out Patrick was having a full conversation with Gemini. What a world we have entered!
So how can non-profit community organizations like Beacon use AI? It’s clear that AI is rapidly changing how we operate, with research showing that non-profits are using AI applications to find operational efficiencies, provide engagement tracking and predictive analytics, enhance service delivery, and personalize support. AI is making services more accessible, providing data analysis and insights, and even helping to coordinate relief efforts during disasters. There are, of course, ethical considerations, particularly around data privacy, algorithmic bias, and misinformation. Beacon recently developed our first organizational AI policy, and the process made it clear that we not only need governance policies and frameworks for how we will use AI within our organization, but also need to proceed with caution and address the ethical challenges involved. We know staff are already using AI in their work, so it has become imperative that we embrace AI rather than hide from it and ensure it is used in a way that is ethical and safe for our clients and for Beacon.
At Beacon, we have been discovering AI tools built into much of the software we have recently purchased. One of our latest purchases, “Prophix” financial software, is a prime example. It uses AI to provide automated insights, generate report commentary, and assist with tasks like scheduling and distribution, integrating AI directly into its financial performance management features. The insights examine variances, trends, and other key metrics, and our forecasting has never been more accurate. Our new accounts payable software uses AI to automatically extract key information from uploaded or emailed invoices, such as vendor names, dates, amounts, and line items, significantly reducing manual data entry and improving processing speed. And it continues to learn the more you use it!
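To give a feel for what “extracting key information from invoices” means in practice, here is a minimal sketch in Python. It is not how Prophix or our accounts payable software actually works; commercial products use trained document-understanding models rather than the hand-written patterns below, and the field names and sample invoice are purely illustrative.

```python
import re
from dataclasses import dataclass


@dataclass
class InvoiceFields:
    vendor: str | None
    invoice_date: str | None
    total: float | None


def extract_fields(text: str) -> InvoiceFields:
    """Pull a few key fields out of plain invoice text using simple patterns."""
    vendor = None
    match = re.search(r"^Vendor:\s*(.+)$", text, re.MULTILINE)
    if match:
        vendor = match.group(1).strip()

    invoice_date = None
    match = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", text)  # ISO-style dates only
    if match:
        invoice_date = match.group(1)

    total = None
    match = re.search(r"total(?:\s+due)?:\s*\$?([\d,]+\.\d{2})", text, re.IGNORECASE)
    if match:
        total = float(match.group(1).replace(",", ""))

    return InvoiceFields(vendor, invoice_date, total)


sample = """Vendor: Acme Office Supplies
Invoice date: 2024-11-03
Total Due: $1,284.50
"""
print(extract_fields(sample))
# InvoiceFields(vendor='Acme Office Supplies', invoice_date='2024-11-03', total=1284.5)
```

The real value of the AI-driven tools is that they handle the messy variety of invoice layouts we actually receive, and they improve as staff correct their mistakes, which is exactly where hand-written rules like these fall down.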
New examples of how AI is being used to deliver social services emerge every day. “MindSuite” analyzes client data to identify potential risks. “Iris AI” helps gather relevant information from research databases to support decision-making. AI-FEED combines artificial intelligence and blockchain technology to improve access to nutritious food and allocate resources more efficiently in food banks, aiming to reduce food waste and bolster community health. In Los Angeles County, a unique pilot program is using predictive analytics to help prevent homelessness. Researchers used anonymized data on almost 100,000 people who use health and mental health services provided by eight community agencies to develop a computer model that identifies who is at the highest risk of becoming homeless. Individuals flagged by the algorithm are then connected to programs and resources that can address their needs, helping ensure they do not experience homelessness.
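For readers curious about what “predictive analytics” looks like under the hood, here is a deliberately simplified sketch in Python. The LA County program’s actual data, features, and modelling approach are not described here; the synthetic features (emergency visits, mental-health visits, past evictions) and the logistic regression model below are assumptions chosen purely for illustration.

```python
# Illustrative only: synthetic data and hypothetical features, not the
# LA County model. Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical anonymized service-use features per person:
# [emergency visits, mental-health visits, past evictions]
X_history = rng.poisson(lam=[2.0, 1.0, 0.3], size=(5000, 3))

# Synthetic historical outcome: probability of homelessness rises with
# service use (made up solely so the example runs end to end).
logit = 0.4 * X_history[:, 0] + 0.6 * X_history[:, 1] + 1.2 * X_history[:, 2] - 4.0
y_history = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Train a simple risk model on the historical records.
model = LogisticRegression().fit(X_history, y_history)

# Score a batch of current clients and flag the highest-risk group
# so caseworkers can connect them to supports proactively.
X_current = rng.poisson(lam=[2.0, 1.0, 0.3], size=(200, 3))
risk_scores = model.predict_proba(X_current)[:, 1]
flagged = np.argsort(risk_scores)[::-1][:20]

print("Top-risk client indices for outreach:", flagged.tolist())
```

The point of the sketch is the workflow, not the model: the algorithm only ranks people, and human staff decide what outreach and supports to offer.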
So how can community organizations like Beacon continue to leverage the innovations that come with AI while remaining ethical in our approach? Beacon is taking a measured approach to implementing AI. Our AI strategy includes building teams responsible for robust data governance, with clear policies for data collection, storage, and usage to protect quality and integrity. We are conducting regular audits and assessments of the AI systems we have launched to check for ethical compliance. We are fostering education and awareness among our staff about the ethical implications of AI. We are prioritizing the safety and security of AI systems through risk assessments and the implementation of safeguards. And above all, we are taking a human-centred approach. AI should be used to augment human capabilities, not replace them. Community service relies on human connection and empathy, which AI cannot replicate.
As AI technology continues to evolve, its potential for non-profits is only just beginning to be realized. Organizations like ours are embracing innovation and leveraging the power of AI, paving the way for a more efficient, effective, and impactful non-profit sector. By strategically integrating AI into our operations, we can amplify our reach, deepen our impact, and ultimately create a better future for the communities we serve.