Crisis Text Line counsels every individual who calls, texts, or messages the organization, connecting them with a counsellor who listens to the problems they are facing and helps them find a solution. However, the organization does not prioritize cases by the order in which texts, calls, or messages arrive, but by the gravity of the problem each individual is facing.
Identify high-risk terms
This is where Crisis Text Line leverages the power of artificial intelligence. The organization uses AI algorithms to identify the individuals who need attention most urgently, especially those contemplating suicide. It mines its data for words, phrases, and emojis that point towards suicidal thoughts and immediately connects those individuals to counsellors to prevent deaths by suicide.
Pill emoji more dangerous than the word ‘suicide’
The organization says it analyzed 129 million messages received between 2013 and 2019 and found startling words and expressions that are far more worrisome than the word ‘suicide’. It has identified more than a hundred terms that should trigger immediate counselling. For instance, individuals who use the pill emoji are 4.4 times more likely to be at risk of suicide than those who use the word ‘suicide’. The loudly crying face emoji likewise denotes high risk and a need for immediate attention. Similarly, the organization has found that individuals using terms like ‘800mg’, ‘acetaminophen’, ‘Excedrin’, and ‘antifreeze’ are two to three times more likely to be at risk than those using the word ‘suicide’.
AI helps read between the lines
In an attempt to read between the lines and grasp the underlying meaning of an individual’s message, the organization set out to identify content that could point to an imminent suicide. In 2015, it collated 50 words from experts that were considered high-risk and then used technology to analyze the messages it received in order to validate those words. Surprisingly, it found that the use of ‘ibuprofen’ signalled sixteen times the risk indicated by the word ‘suicide’.
Red flags to prioritize assistance
When AI-powered algorithms find high-risk words, they flag the conversation for the immediate attention of counsellors, who swing into action. As the technology improves, suicide-risk assessment will become sharper, making it easier to extend help to those who need it most.
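Crisis Text Line’s actual model is proprietary, but the triage idea described above can be sketched in a few lines. The sketch below is purely illustrative: the term list and weights are hypothetical, loosely echoing the relative-risk figures quoted in this article, and a real system would use far more sophisticated language analysis than simple substring matching.

```python
import heapq

# Hypothetical risk weights, for illustration only: relative risk versus
# the baseline word "suicide" (weight 1.0), loosely based on the figures
# quoted in the article. A production system would learn these from data.
RISK_WEIGHTS = {
    "ibuprofen": 16.0,
    "\U0001F48A": 4.4,   # pill emoji
    "800mg": 3.0,
    "acetaminophen": 3.0,
    "excedrin": 2.0,
    "antifreeze": 2.0,
    "suicide": 1.0,
}

def risk_score(message: str) -> float:
    """Score a message by the highest-risk term it contains (0.0 if none)."""
    text = message.lower()
    return max((w for term, w in RISK_WEIGHTS.items() if term in text),
               default=0.0)

def triage(messages):
    """Yield (score, message) pairs in descending risk order, not arrival order."""
    # heapq is a min-heap, so negate scores; the index breaks ties by arrival.
    heap = [(-risk_score(m), i, m) for i, m in enumerate(messages)]
    heapq.heapify(heap)
    while heap:
        neg_score, _, msg = heapq.heappop(heap)
        yield -neg_score, msg
```

For example, a message mentioning ‘800mg’ would be surfaced to a counsellor ahead of an earlier-arriving message that scored lower, which is precisely the severity-first queueing the organization describes.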
By Neetu Katyal, Content and Marketing Consultant
She can be reached on LinkedIn.