Law enforcement authorities around the world want to predict violent crimes before they happen. In the UK, police are working on a project that combines artificial intelligence (AI) with statistics to measure the risk of someone committing, or becoming the victim of, a violent crime.
As reported by NewScientist, the UK system is called the "National Data Analytics Solution" (NDAS), and the concept rests on the idea of flagging people before a crime happens, in order to offer "interventions" intended to avert "criminal behavior".
The system draws on national and local police databases - more than one terabyte of data collected so far - including reports of crimes committed and records on about 5 million identifiable individuals.
NDAS uses 1,400 indicators to help it predict whether someone will commit a crime - parameters such as how many times a person has offended in the past, or how many people close to them have committed crimes. Those flagged by the system as prone to violent acts receive a "risk score".
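The article does not disclose how NDAS combines its 1,400 indicators into a single score. A common approach in risk-scoring systems is a weighted sum over indicator values; the sketch below is purely illustrative - the indicator names, weights, and example values are invented for this sketch and do not reflect the actual NDAS model.

```python
# Hypothetical risk-score sketch: a weighted sum over indicators.
# All indicator names and weights here are INVENTED for illustration;
# the real NDAS model and its 1,400 indicators are not public.

INDICATOR_WEIGHTS = {
    "prior_offences": 0.5,        # times the person has offended before
    "offending_associates": 0.3,  # people close to them who have offended
    "recent_police_contacts": 0.2,
}

def risk_score(person: dict) -> float:
    """Combine a person's indicator values into one score (higher = riskier)."""
    return sum(weight * person.get(name, 0)
               for name, weight in INDICATOR_WEIGHTS.items())

person = {"prior_offences": 2, "offending_associates": 3, "recent_police_contacts": 1}
print(round(risk_score(person), 2))  # 0.5*2 + 0.3*3 + 0.2*1 → 2.1
```

Real systems of this kind typically learn such weights from historical data rather than setting them by hand, which is precisely where the bias concerns discussed later in this article arise.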
If all this sounds to you like the plot of the film Minority Report, that is because the concept is quite similar. What is still under discussion is exactly what will happen once these high-risk individuals are identified.
Iain Donnelly, the project's lead police officer, explained that the intention is not to arrest anyone in advance, but instead to offer support through local health and social workers.
For example, an individual with a history of mental health problems could be offered counselling if the system flags them as highly likely to commit a violent crime, and potential victims could also be contacted by social services.
West Midlands Police, in the UK, leads the pilot project and says it is developing the system day to day with attention to the ethical and privacy concerns involved. Eight other police forces are already participating, and the team hopes to expand across the UK.
The ethical issues and biases of AI
If we know anything, it is that the algorithms that shape our lives are as unfair as we are. Artificial intelligence inherits prejudice because models learn from the data fed into them, and the data of the world we live in is certainly full of prejudice.
The system will be overseen by the Information Commissioner's Office, the UK's data protection authority, which will ensure that NDAS complies with privacy regulations. However, the project has already been criticized over its ethical problems.
A team at the Alan Turing Institute in London questioned whether it serves the public welfare to intervene when an individual has not committed any crime, merely because they are deemed likely to do so in the future. Although the program has good intentions, it seems to ignore the problem of inaccurate predictions.
Then there is the fact that basing predictions on past arrest histories risks tying predictions to particular places and reinforcing the system's existing biases. In many cases, arrests correlate with where police are deployed rather than with crime itself, which tends to disproportionately affect people of color and residents of poor neighborhoods.
These are exciting times to sit back and watch emerging technologies develop.
This article was written by Dr. Raul V. Rodriguez, Dean, Woxsen School of Business.
He can be reached on LinkedIn.