Far too many children die from abuse and neglect in the United States: around 1,600 annually, according to the National Child Abuse and Neglect Data System.
Many states and communities have worked diligently to reduce those numbers, with varying degrees of success. Believing in the power and efficiency that technology can bring to the solution, child welfare agencies across the country are eagerly embracing “big data” to help them make critical decisions and allocate resources where they are most needed.
This new frontier in child welfare, like most uncharted territory, is not without its critics.
Finding More Needles in the Haystack
Similar to the concept of hot-spotting in the healthcare industry, predictive analytics in child welfare uses data mining and statistical analysis to project which children known to the child welfare system are at highest risk. Attention can then be focused on changing the trajectory for these children and families through supports and interventions.
The goal is to address the most complex situations and produce better outcomes. This shift marshals a more public-health-oriented approach to child safety, emphasizing prevention rather than simply reacting after severe neglect, abuse or fatalities have already occurred.
Look Before Leaping
Overall, shifting the approach in child welfare from reacting to tragedies to preventing them seems to be a well-intentioned strategy. More informed decision-making based on better data could help overburdened child protection agencies identify the most at-risk children and utilize limited resources effectively.
The promise of analyzing real-time data to notify child welfare professionals of warning signs that might otherwise be missed is compelling. The possibility that agencies can place limited resources where they are utilized most effectively seems prudent and smart.
Yet, critics remain mostly unconvinced and warn of numerous potential problems, including flooding the system with false positives, increased risks of abuse, stigmatization of families scored as high-risk and even built-in biases against minorities.
Proponents of predictive analytics mostly minimize those concerns, reminding us that predictive analytics are tools meant to enhance, not supplant, the critical thinking of child welfare staff, often by providing additional sets of unbiased eyes.
Other concerns are harder to wave off, particularly the confidentiality issues raised by generating risk projections from data gathered for purposes other than child protection, data that is then assessed without families’ knowledge or consent.
Much like the real-time data on which these systems base their predictions, the concerns presented by using such data are being addressed in real-time as well.
That is perhaps most concerning of all.
Doing Well by Doing Good
Child welfare predictive analytics have also been a bonanza for private companies seeking to enter new markets.
In Florida’s Hillsborough County, for example, a child welfare predictive analytics tool developed by Mindshare and Eckerd Kids, called “Rapid Safety Feedback,” has been in use for just five years, yet it has already reportedly spread to nine other states. This is despite continued concerns about the use of predictive analytics in child welfare in general, and despite Eckerd officials’ own admission that Rapid Safety Feedback by itself may not have anything to do with reducing child deaths.
“I never try to claim causality,” Eckerd’s Bryan Lindert has been quoted in news articles.
But others are actively courting predictive analytics contracts in the social services industry.
The SAS Institute, considered the world’s largest privately held software firm, has also set its sights on the child welfare predictive analytics market. For Los Angeles, SAS developed a child welfare predictive analytics tool called AURA that crunches data from a wide variety of law enforcement, mental health, child welfare and other sources.
Included in a National Child Safety Strategy
Even the bipartisan National Commission to Eliminate Child Abuse and Neglect Fatalities (CECANF), which was tasked with developing a national child safety strategy, recommended states make better use of real-time data for child protection through predictive analytics.
The CECANF report, however, was subject to scathing criticism by no fewer than two of its commissioners. In her published dissenting comments, Judge Patricia Martin appeared to assert that CECANF had a pre-determined agenda and ignored much of the testimony from experts and some of its own commissioners in its “unorthodox editing process.”
She went on to criticize the “inherent limitations” of predictive analytics, quoting expert testimony from Emily Putnam-Hornstein, who believes there will never be enough data to demonstrate empirically that predictive analytics prevents child fatalities.
Advice from an Old Adage
As with any new technology, it takes time and consistent effort to iron out the kinks. Public agencies would be wise to fully analyze their capacity to adapt to such changes, build the solid data infrastructure that predictive analytics requires and ensure the right custom solution is developed to meet the unique needs of their communities.
Too often in the rush to improve child welfare, new approaches take hold prior to rigorous scientific research and validation.
Or put another way, “Garbage in, garbage out” still rings true today.
If you like reading our publications, we need your help to keep them coming. The Children’s Campaign accepts NO government funding, and therefore donations from readers like you are necessary in order to retain our independent voice. Please consider making a tax-deductible donation today!
This Top Story is brought to you by Roy Miller, Karen Bonsignori and Tiffany McGlinchey