Social Media Tool Could Help Predict Suicide Risks
A new artificial intelligence algorithm is being designed to scour social media outlets for users contemplating suicide.
LiveScience reports that by monitoring content from Facebook, Twitter and LinkedIn, an initiative called the Durkheim Project hopes to identify at-risk individuals and enable earlier intervention. The program launched on July 2 and currently targets only combat veterans – a group plagued by a disproportionately high suicide rate.
"The study we've begun with our research partners will build a rich knowledge base that eventually could enable timely interventions by mental health professionals," said Chris Poulin, principal investigator on the project, in a statement. "Facebook's capability for outreach is unparalleled."
The current version of the program is designed to identify correlations between certain online behavior and self-harm. Veterans opt into the program and install an app that uses specialized algorithms to track and assess keywords, phrases and patterns.
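The project has not published its models, but a minimal sketch suggests what keyword-and-phrase scoring of this kind might look like. Everything below – the phrase list, the weights and the flagging threshold – is invented for illustration and is not the Durkheim Project's actual algorithm.

```python
# Illustrative sketch only: the Durkheim Project has not published its
# algorithms. The phrases, weights and threshold here are hypothetical.
from dataclasses import dataclass

# Hypothetical weighted phrases; a real system would derive these from
# clinical research data rather than a hand-picked list.
RISK_PHRASES = {
    "can't go on": 3.0,
    "no way out": 3.0,
    "burden to everyone": 2.0,
    "hopeless": 1.0,
}

@dataclass
class PostScore:
    text: str
    score: float

def score_post(text: str) -> PostScore:
    """Sum the weights of any risk phrases found in a single post."""
    lowered = text.lower()
    score = sum(w for phrase, w in RISK_PHRASES.items() if phrase in lowered)
    return PostScore(text=text, score=score)

def flag_user(posts: list[str], threshold: float = 5.0) -> bool:
    """Flag a user only when recent posts accumulate past a threshold,
    so one dark post alone does not trip an alert but a pattern can."""
    total = sum(score_post(p).score for p in posts)
    return total >= threshold
```

Even a toy version like this hints at the difficulty discussed below: counting phrases captures correlation, not intent, which is why any real model would need far richer behavioral context than keyword matches alone.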
At the moment, the project is in a passive, non-interventional research phase focused on data gathering and algorithmic modeling. This carries a grim implication: the patterns needed to identify at-risk users won't be available until some participants have already taken their own lives.
Future versions of the app may then notify professionals or family members if a user’s web presence begins to exhibit suicidal activity patterns.
According to Poulin, the project is firmly grounded in previous research, particularly a 2011 study that examined the social media presence of veterans. The study, which served as a precursor to the current program, found that more than 65 percent of veterans who later committed suicide regularly used certain keywords or phrases in their social media output.
However, some psychological factors may restrict the Durkheim Project's ability to intervene, even when the finalized algorithms are in place. For privacy reasons, the program must be installed by the users themselves, which means it only surveys data from individuals who already consider themselves at risk – and who, paradoxically, have already taken a preventive step by installing the app. The program thus resembles a passive “safety net” rather than an active interventional tool.
In addition, the leap from mere correlation to hard causality is exceedingly difficult, as the psychological terrain the program is designed to probe is both complex and counterintuitive. It is not simply a matter of tracing “negative” output patterns: a string of depressed tweets or Facebook statuses doesn't necessarily mean the user intends to take his or her own life. Instead, the program must grapple with broad, interpersonal patterns of high complexity.