Updated: 7th Sep 2020 Reading time: 2 minutes

Racist British AI Shut Down For Inaccuracy

Artificial intelligence and machine learning are all the rage right now. It's no surprise that law enforcement has been interested in using these emerging technologies. In combination with facial recognition, they can find suspects in criminal investigations, or prevent crimes before they happen.

The most recent crack at the latter has been, predictably, an absolute failure.

Reports began to surface in 2018 about a new AI crime-prevention system being developed by the UK government, the National Data Analytics Solution (NDAS). Funded with over £10 million, it was supposed to determine ahead of time whether someone was likely to commit a crime.

More specifically, Most Serious Violence (MSV), part of the NDAS initiative, assessed factors like past run-ins with police to give each person a score predicting whether they were likely to commit a violent act with a gun or a knife within the next two years. If so, they'd be sent to therapy.

As the system began rolling out earlier this year, its developers discovered a coding flaw that made it incapable of predicting what it was supposed to "with any degree of precision."

Upon fixing the coding flaw, accuracy reportedly fell to between 14% and 19%, a sharp drop from the previously claimed 75%. Accuracy wasn't even affected by whether the person had priors.

Despite reworking the system, the best accuracy rate achieved under ideal circumstances was 51%, barely better than flipping a coin.
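For context, here's a quick sanity check (a made-up simulation, not NDAS code) showing why that comparison holds: on a balanced yes/no question, a classifier that just flips a coin already lands near 50% accuracy.

```python
import random

# Hypothetical sanity check: on a balanced binary question,
# guessing at random already gets you roughly 50% accuracy.
random.seed(0)
n = 100_000
labels = [random.random() < 0.5 for _ in range(n)]   # balanced ground truth
guesses = [random.random() < 0.5 for _ in range(n)]  # coin-flip "classifier"
accuracy = sum(g == t for g, t in zip(guesses, labels)) / n
print(f"coin-flip accuracy: {accuracy:.1%}")  # prints ~50.0%
```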

Worse than the accuracy is the bias. The system considered factors like age, days since a person's first crime, the severity of their crimes, their connections to other criminals, and how often they mention knives in their communications.
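To make that concrete, a system like MSV essentially boils down to a weighted score over factors like these. The sketch below is purely illustrative: the field names, weights, and threshold are my assumptions, not the actual NDAS model.

```python
from dataclasses import dataclass

@dataclass
class PersonRecord:
    # Hypothetical stand-ins for the factors reported in the press,
    # not actual NDAS fields.
    age: int
    days_since_first_crime: int
    crime_severity: float      # e.g. 0.0 (minor) to 1.0 (severe)
    criminal_associates: int   # known connections to other offenders
    knife_mentions: int        # knife references in communications

def risk_score(p: PersonRecord) -> float:
    """Toy weighted sum; the real model and its weights are unknown."""
    return (
        0.30 * (1 / max(p.age, 1))                    # younger -> higher risk
        + 0.20 * (1 / max(p.days_since_first_crime, 1))
        + 0.25 * p.crime_severity
        + 0.15 * min(p.criminal_associates / 10, 1.0)
        + 0.10 * min(p.knife_mentions / 5, 1.0)
    )

FLAG_THRESHOLD = 0.5  # assumed cut-off for referral

person = PersonRecord(age=21, days_since_first_crime=400,
                      crime_severity=0.6, criminal_associates=4,
                      knife_mentions=2)
score = risk_score(person)
print(f"score={score:.2f}, flagged={score >= FLAG_THRESHOLD}")
```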

Despite excluding location and ethnicity data, the system still relied on many factors that correlate with location and ethnicity, such as proximity to known criminals and prior run-ins with police, which made the scores unreliable.
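One way to see the problem: even when a protected attribute is dropped, a feature correlated with it carries the bias back in. Here's a small synthetic demonstration of that mechanism; all the numbers are invented for illustration.

```python
import random

random.seed(1)

# Synthetic population: a hidden 'group' attribute is never shown to
# the model, but over-policing (assumed here) makes 'police_contacts'
# correlate with it, so the proxy feature reproduces the bias anyway.
def simulate_person():
    group = random.random() < 0.3  # protected attribute (excluded from model)
    stop_rate = 0.30 if group else 0.10  # assumed over-policing of the group
    police_contacts = sum(random.random() < stop_rate for _ in range(20))
    return group, police_contacts

people = [simulate_person() for _ in range(10_000)]

# "Model" that only ever sees the proxy feature
flagged = [(group, contacts >= 4) for group, contacts in people]

for g in (True, False):
    flags = [f for grp, f in flagged if grp == g]
    print(f"group={g}: flag rate = {sum(flags) / len(flags):.1%}")
# The flag rates differ sharply by group even though 'group' was
# never an input to the decision.
```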

The ethics committee unanimously concluded that development of the system should stop and that none of the current implementations should move forward.

