Chapin Hall Explores How AI Can Improve Child and Family Well-being

Read Chapin Hall’s AI Guidelines

Innovation and rigor are cornerstones of Chapin Hall's work. As new technologies evolve, our experts explore how they can be applied to solve public policy problems and improve child and family well-being.

Before artificial intelligence (AI) became commonly used, Chapin Hall was exploring the potential utility of predictive analytics and machine learning, drawing on our decades of stewardship of and expertise in managing public agency administrative data. In a 2021 study, for example, Chapin Hall Research Fellow Dr. Brian Chor and Researcher Zhidi Luo studied how interpretable machine learning could be applied to predict the likelihood of youth running away from their foster care placements. Understanding the predictors that increase or mitigate the risk of running away can help child welfare agencies improve placement decisions and prevent placement disruption, better protecting the youth in their care.

Soon after, awareness of applied AI expanded rapidly with the public release of the large language model ChatGPT in 2022, and other AI applications for data analysis and visualization followed. With this broader access, Chapin Hall recognized the importance of setting clear standards for using AI that ensure high levels of rigor, privacy, and ethics to protect and advance the social good of children and families through our work. We also needed to be able to advise our partners on using these AI tools.

AI Working Group & Guidelines

Chapin Hall formed an AI working group made up of staff from across the organization, co-led by Alex Cohn, Chapin Hall's Chief Information Security Officer and Director of Research Technology, and Dr. Karikarn (Kay) Chansiri, a researcher focused on machine learning and AI. The working group was charged with tracking AI advances, establishing best-practice principles, and applying them to Chapin Hall's work. Together, the group developed AI guidelines to ensure the ethical and responsible use of AI tools, protect the sensitive data we work with, and contribute positively to AI-informed decision-making processes that affect children and families. We regularly review and update these guidelines, which have served as a model for dozens of organizations and can be downloaded below.

Chapin Hall's AI working group has also met with preeminent experts in the field, including Oki Mek of Microsoft, Dr. Pat Pataranutaporn of MIT, James Donovan of OpenAI Moonshots, and Sandip Trivedi of Quantierra. These leaders have helped us extend the scope of our AI collaborations, shape our practices, inform our work with partners, and keep our AI guidelines current.

Below are recent examples of Chapin Hall's work that applies predictive analytics, machine learning, and AI methods:

Using Predictive Analytics to Improve Child Welfare Decision-Making

Chor's research team published the development and validation of a proof-of-concept predictive model to help state child welfare agencies implement the residential care provisions of the Family First Prevention Services Act (Family First; P.L. 115-123), which sets a higher bar for states to claim federal fiscal support for youth's needs-based, short-term treatment in residential care. To help child welfare agencies identify residential treatment needs more accurately and proactively, we developed a model that predicts the risk that a youth entering the child welfare system will be placed in residential care. The model helps agencies identify risks within the first 90 days, which coincides with the timeframe for assessments and court reviews required by Family First. When caseworkers can better predict and preempt the need for residential care, they can match families to appropriate services earlier. Using these predictive analytic approaches, Chor's team also published a study on predicting the risk of a youth running away from foster care in the first 90 days of care, which showed similarly promising results that could benefit caseworker decision-making.
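The kind of interpretable model described above can be sketched as a small logistic regression whose coefficients read directly as risk factors. This is purely illustrative, not the published model: the feature names and synthetic data below are invented for the example.

```python
import math
import random

random.seed(0)

# Hypothetical feature names, invented for illustration only.
FEATURES = ["prior_placements", "age_at_entry", "behavioral_flag"]

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid math.exp overflow
    return 1 / (1 + math.exp(-z))

def make_case():
    x = [random.randint(0, 5), random.randint(0, 17), random.randint(0, 1)]
    # Synthetic ground truth: prior placements and a behavioral flag raise risk.
    y = 1 if random.random() < sigmoid(0.8 * x[0] + 1.5 * x[2] - 3.0) else 0
    return x, y

data = [make_case() for _ in range(2000)]

w, b, lr = [0.0] * len(FEATURES), 0.0, 0.01
for _ in range(50):  # stochastic gradient descent on the log-loss
    for x, y in data:
        g = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# Each coefficient is a log-odds risk factor per unit of its feature,
# so the fitted model can be read and audited directly.
for name, wi in zip(FEATURES, w):
    print(f"{name}: {wi:+.2f}")
```

Because the coefficients are human-readable, an agency analyst can see which factors drive a predicted risk rather than trusting a black box, which is the point of choosing interpretable methods in this setting.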

Another challenge child welfare agencies face is staff burnout that leads to high turnover. Using state-of-the-art natural language processing methods, Chansiri and team analyzed voluntary, open-ended survey comments from child welfare workers in one state child welfare agency. The team identified negative emotional trends, workers' need for assistance in work management, and trauma associated with their work. These themes will not only advance Chapin Hall's understanding of child welfare professionals' challenges but will also contribute to the development of more tailored evidence-informed interventions to support the workforce.
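As a toy illustration of this kind of text mining (the study itself used far more sophisticated NLP methods), recurring themes can be surfaced by counting content words across open-ended comments. The comments and stopword list below are invented for the example.

```python
import re
from collections import Counter

# Invented example comments, not actual survey data.
comments = [
    "Caseloads are overwhelming and I feel burned out",
    "I need help managing my workload and paperwork",
    "The trauma in these cases stays with me after work",
    "Overwhelming paperwork leaves no time for families",
]

# A minimal stopword list; real pipelines use standard NLP stopword sets.
STOPWORDS = {"are", "and", "i", "my", "the", "in", "me", "no", "for",
             "these", "with", "after", "stays", "feel", "leaves", "time"}

def tokenize(text):
    """Lowercase, split into words, and drop stopwords."""
    return [w for w in re.findall(r"[a-z]+", text.lower())
            if w not in STOPWORDS]

counts = Counter(w for c in comments for w in tokenize(c))
print(counts.most_common(3))
```

Even this crude count surfaces "overwhelming" and "paperwork" as recurring terms; modern methods extend the same idea with embeddings, topic models, and sentiment classifiers to identify themes like emotional strain and workload.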

Addressing Gender Bias in Large Language Models

In Chapin Hall's commitment to advancing racial and gender equity, we use rigorous social science methods to identify where bias intersects with AI and how bias can be perpetuated. In one project presented at an IEEE conference, Chansiri identified and addressed biases in AI models used in mental health diagnoses. Specifically, she examined gender bias in large language models and how it may perpetuate stereotypes related to the over-diagnosis of women with Borderline Personality Disorder and men with Narcissistic Personality Disorder.
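One common way to audit this kind of bias, sketched below with entirely invented numbers (this is not the study's method or data), is counterfactual gender swapping: present a model with clinical vignettes that differ only in gendered terms, then test whether diagnosis rates differ more than chance would allow.

```python
import math

def swap_gender(vignette):
    """Build the counterfactual vignette by swapping gendered words.
    Real audits use much more thorough swap tables."""
    table = {"she": "he", "her": "his", "woman": "man"}
    return " ".join(table.get(w, w) for w in vignette.split())

# Invented tallies: how often a hypothetical model assigned a BPD label
# to 500 female-framed vs. 500 male-framed versions of the same vignettes.
n = 500
bpd_female, bpd_male = 190, 120

def two_proportion_z(k1, k2, n):
    """Two-proportion z-test with a pooled standard error."""
    p1, p2 = k1 / n, k2 / n
    p = (k1 + k2) / (2 * n)                # pooled proportion
    se = math.sqrt(p * (1 - p) * (2 / n))  # pooled standard error
    return (p1 - p2) / se

z = two_proportion_z(bpd_female, bpd_male, n)
print(f"female-vs-male BPD gap: z = {z:.2f}")
```

A large z score here would indicate that the only thing distinguishing the vignettes, their gendered framing, is shifting the model's diagnoses, which is the stereotype-driven pattern this line of research aims to detect and correct.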

The goal of this work is to build the evidence base for creating fairer and more inclusive AI diagnostic tools that can positively impact diverse populations. The findings of this study are published in the Proceedings of the 2024 International Conference on Big Data Analytics and Practice in IEEE Xplore and will be posted on the Chapin Hall website when the reports are complete.

Using AI to Expand Dissemination

Chapin Hall is committed to a robust dissemination process that ensures our research is applied in the human services field. Good dissemination is a sustained process of thoughtfully communicating with the people best positioned to use and apply research in ways that improve outcomes for families. This process involves developing and delivering messages repeatedly through many channels, and it is labor intensive.

The communication team at Chapin Hall, working with other colleagues across the field of research communication, is actively applying large language models to expand our dissemination tactics. Using only our own reports as input, large language models help us summarize lengthy research reports and translate technical descriptions into plain language. They also help us quickly produce first drafts of outreach materials for specific audiences.
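A practical step in any such workflow, sketched below with illustrative sizes (not Chapin Hall's actual pipeline), is splitting a long report into overlapping chunks that fit a model's context window before asking a language model to summarize each piece.

```python
def chunk_report(text, chunk_words=300, overlap=50):
    """Split text into word chunks that overlap, so context carries over
    between the pieces sent to a language model for summarization."""
    words = text.split()
    step = chunk_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_words]))
        if start + chunk_words >= len(words):
            break
    return chunks

# A stand-in 700-word "report"; a real run would read a report file.
report = ("word " * 700).strip()
chunks = chunk_report(report)
print(len(chunks), "chunks")
```

Each chunk would then be passed to a model with a summarization prompt, and the per-chunk summaries combined into a draft for human editing. Keeping the input restricted to the organization's own reports, as described above, also limits what the model can draw on.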

Chapin Hall is committed to exploring how AI can enhance service delivery to children and families and expedite the dissemination of evidence-based practices. For more information about this work, follow the links on this page, or contact Alex Cohn.