Working Thoughtfully with Artificial Intelligence
Not long ago, artificial intelligence (AI) seemed like an idea from the distant future. Now AI is everywhere – from Google providing an AI summary of your search to bots like ChatGPT that mimic conversational responses. People can even use AI to create “deepfakes” – AI-altered images and videos that show things that never happened.
No matter where you turn, AI seems to offer solutions to all our problems. But just how much can AI help? What are the limitations of this technology?
Generally speaking, there are two types of AI – generative and predictive. Generative AI creates new content, such as the conversational responses of customer service chatbots and some versions of virtual assistants like Siri and Alexa. Predictive AI analyzes historical data to forecast outcomes, such as financial trends, stock prices, or other economic factors.
The Role of AI in Community Assessment and Grant Writing
For organizations that need to collect and analyze community data, both generative and predictive AI promise a faster, more efficient way to get things done. Sifting through mountains of publicly available data to make recommendations for service delivery is a daunting task. AI can review demographic and community-resource data and surface patterns within it. Researchers find that AI can also identify vulnerable populations and underserved areas, and even recommend targeted interventions.
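To make this concrete, here is a deliberately simple, non-AI sketch in Python of the kind of screening an AI tool might automate: flagging areas that combine low income with poor access to services. The file name, column names, and cutoffs are hypothetical stand-ins, not a real data source or a recommended methodology.

```python
# A minimal sketch, assuming a hypothetical CSV of census-tract data with
# columns "tract", "median_income", and "providers_per_1000".
import pandas as pd

tracts = pd.read_csv("community_tracts.csv")  # hypothetical file name

# Hypothetical rule: flag tracts in the bottom income quartile that also
# have below-median access to service providers.
income_cutoff = tracts["median_income"].quantile(0.25)
access_cutoff = tracts["providers_per_1000"].median()

underserved = tracts[
    (tracts["median_income"] <= income_cutoff)
    & (tracts["providers_per_1000"] < access_cutoff)
]
print(underserved[["tract", "median_income", "providers_per_1000"]])
```

Even this toy rule hints at why transparency matters: someone chose those columns and cutoffs, and those choices shape who gets flagged.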
Managing Expectations
As AI has become more popular, concerns about its limitations and challenges are on the rise. As one researcher noted, “AI models are only as good as the data they are trained on.” Data bias occurs when the source data is unrepresentative, incomplete, or shaped by historical prejudices. AI doesn’t have critical thinking skills, so it treats flawed data the same as sound data. If we rely on AI models trained on biased data, the results could perpetuate existing social inequities. Another way of saying this is “garbage in, garbage out.”
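“Garbage in, garbage out” is easy to demonstrate with a toy example. In this hypothetical sketch (all figures invented for illustration), the same simple calculation gives a misleading answer when the sample is not representative – say, because a survey only reached households with internet access.

```python
# Hypothetical illustration of data bias: the identical calculation,
# run on a non-representative sample, understates the community's need.
# All figures are invented for illustration only.

# 1 = household reports unmet childcare needs, 0 = it does not.
full_population = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]  # 70% report unmet needs
online_only_sample = [0, 1, 0, 0, 1, 0]  # reached only internet-connected homes

def unmet_need_rate(responses):
    """Fraction of respondents reporting unmet needs."""
    return sum(responses) / len(responses)

print(unmet_need_rate(full_population))     # 0.70 -- the true rate
print(unmet_need_rate(online_only_sample))  # about 0.33 -- the skew hides the need
```

No amount of sophisticated modeling downstream can repair an input sample like this; the bias is baked in before the analysis begins.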
Just being aware of data bias isn’t enough. Relying on AI models for decision-making carries other risks. The AI decision-making process lacks transparency: we don’t know how AI chooses what data to include or exclude, how the models are trained, or how they arrive at their outcomes and recommendations. While a number of state regulators have issued guidance on AI transparency, there is currently no federal guidance. The end user is responsible for making sure that the AI they use to gather and analyze data is clear and transparent – and free of bias. Is your organization up to that task?
Implications for Programs
While AI can automate tasks such as data gathering, it can’t replicate the role of humans in analytical reasoning, critical thinking, and decision-making.
To complete a well-informed community assessment or grant application, we all must apply a critical eye and a healthy dose of skepticism when using AI.
Ask yourself:
Where did the data come from, and how accurate is it (data bias)?
How do the data and recommendations align with what I know to be true, based on my own experiences (transparency)?
What is the data telling me? What are the implications (critical thinking)?
How can I use the data to inform our service delivery (analytical reasoning and decision-making)?
While AI is a tool that can help you work smarter, ultimately the people in your organization are responsible for understanding the needs of your community and delivering services to support them.
If you have questions about how to integrate AI into your community assessment or grant writing, Foundations for Families is here to help. We continue to test out new tools and invite you to explore our Consulting Services. Please reach out to find out how we may be able to assist your agency or program.
Read our statement on artificial intelligence here.
Thank you for reading our blog. We encourage you to use our blog posts for thought, integration, and sharing. When using or sharing content from blog posts, please attribute the original content to Foundations for Families.