How Will Law Enforcement Use AI For Crime Prevention in 2021?

The use of artificial intelligence (AI) has come a long way since the 1950s, when the concept was first explored by Alan Turing and then put into practice by Allen Newell and Herbert A. Simon, who created the first AI program, known as the "Logic Theorist."

Today, AI applications can be found everywhere. While consumers may assume that these use cases are limited to personal assistants like Siri and Cortana, AI is heavily used in the B2B and enterprise world. For example, financial institutions use AI to fight money laundering, media companies use it to upscale video footage, and law enforcement agencies are integrating AI to help solve cases faster.

It's worth noting that while the global pandemic and the shift to remote work have accelerated the adoption of AI across industries, 2021 will likely be the year we see widespread adoption. This is especially true in the modern office setting, as we finally have cost-effective solutions that can handle repetitive tasks so staff can focus on work that requires more thought and expertise. We're also likely to see AI adopted more broadly in the law enforcement field.

Some law enforcement agencies have already been using AI to solve cases faster, helping to put criminals in jail and to clear innocent parties. When many people hear "AI and law enforcement," however, they're quick to assume we're talking about facial recognition tools, but that's only one application of AI (where lawful) that investigators have at their disposal. Let's look ahead to see how law enforcement will increasingly adopt a wide range of AI solutions to improve policing in the future.

AI Will Make Criminal Investigations More Manageable 

One of the greatest challenges law enforcement agencies face today is the sheer volume of data involved in each investigation, which causes mounting backlogs and slows down investigative work. Investigators need the ability to analyze these digital assets to discover information relevant to each case and potential evidence admissible in court. The challenge stems not only from the volume of data that must be analyzed (consider the maximum storage on a computer just 20 years ago versus today, and you can understand the issue), but also from the variety of sources from which digital evidence is collected. Computers and mobile phones are obviously crucial sources of evidence, but even items such as wearables and IoT devices need to be taken into consideration.

To combat this data overload, next year we expect to see more agencies adopt AI- and machine-learning-based tools that can automatically analyze and interpret data from video, sensors, and even biometrics, and produce findings that are admissible in court. These solutions use AI to make connections between data points that could otherwise be missed by human investigators, or would simply take too long to discover, as the sketch below illustrates. This can prove a crucial differentiator in urgent cases like kidnappings, where a victim needs to be found and brought to safety as soon as possible.
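
For readers who want a sense of what "making connections between data points" can look like in practice, here is a minimal sketch that links identifiers (phone numbers, email addresses) appearing across different device extracts and surfaces the resulting clusters. This is not any vendor's actual implementation; the sample data, field names, and the use of the networkx library are assumptions made purely for illustration.

```python
# Minimal illustration: linking identifiers that co-occur across device extracts.
# Hypothetical, simplified data; real extracts are far richer and parsed by forensic tools.
import networkx as nx

# Each extract lists identifiers (phones, emails) found on one seized device.
extracts = {
    "device_A": {"+1-555-0100", "alice@example.com", "+1-555-0199"},
    "device_B": {"+1-555-0199", "bob@example.com"},
    "device_C": {"bob@example.com", "carol@example.com"},
}

# Build a graph: identifiers found on the same device are connected.
graph = nx.Graph()
for device, identifiers in extracts.items():
    ids = sorted(identifiers)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            graph.add_edge(a, b, device=device)

# Connected components reveal indirect links that an investigator would
# otherwise have to find by manually cross-referencing every extract.
for component in nx.connected_components(graph):
    print("Linked identifiers:", sorted(component))
```

In this toy example, a single component spans all three devices because device_B shares identifiers with both of the others; at the scale of real investigations, exactly this kind of indirect link is easy to miss by hand.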

These new tools will allow cases to be run effectively through the use of Digital Intelligence: the data captured from digital sources such as smartphones, computers, and the cloud, together with the process by which agencies analyze that data and leverage it to run their operations efficiently.

An added benefit of this AI-based analysis is that it helps minimize the amount of time investigators must spend reviewing graphic images, which can take a psychological toll over time. AI-powered searches can be customized to automatically isolate images from photo and video sources that relate to specific crimes. Weapons, trafficking, and child abuse can all be identified without subjecting an investigator to the graphic footage itself. All of this analysis can be performed legally and compliantly so it's admissible in court.
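
To make the idea of automated image triage concrete, here is a minimal sketch that scores a folder of images with a pretrained classifier, flags only those whose top label falls into categories of interest, and hashes every flagged file to support an audit trail. The ImageNet ResNet used here is only a stand-in; a real deployment would rely on a model trained and validated for the specific crime categories, and the folder path and category list are assumptions for illustration.

```python
# Illustrative image triage: flag images whose predicted label matches categories
# of interest so investigators review only what is likely relevant.
# The pretrained ImageNet model is a placeholder for a purpose-built classifier.
import hashlib
from pathlib import Path

import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

# Hypothetical categories of interest (ImageNet labels used only as stand-ins).
CATEGORIES_OF_INTEREST = {"revolver", "rifle", "assault rifle"}

def triage(folder: str) -> list[dict]:
    flagged = []
    for path in Path(folder).glob("*.jpg"):
        image = Image.open(path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)
        with torch.no_grad():
            probs = model(batch).softmax(dim=1)[0]
        idx = int(probs.argmax())
        if labels[idx] in CATEGORIES_OF_INTEREST:
            flagged.append({
                "file": str(path),
                "label": labels[idx],
                "confidence": round(float(probs[idx]), 3),
                # SHA-256 of the original file supports chain-of-custody records.
                "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
            })
    return flagged

if __name__ == "__main__":
    for hit in triage("evidence_images"):  # hypothetical folder name
        print(hit)
```

A human investigator would still verify every flagged item; the point of the sketch is only to show how the initial pass, the part that exposes people to the most material, can be automated.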

AI Becomes Ethical

It's no secret that the public has a negative perception of AI-based tools, fearing that this type of technology can invade their privacy or be racially biased. While these are understandable concerns, it's important to understand that law enforcement does not use AI to infringe on people's privacy; these tools draw only on selected pieces of information to solve crimes and keep the communities they serve safer.

However, that's not to say law enforcement agencies don't recognize there's a communication gap between the law and the public on this issue. They do, and as more agencies begin to adopt these AI- and ML-based solutions, there's an onus on law enforcement to abide by ethical policies, to remove bias from such tools, and to address the negative perceptions surrounding them.

As such, departments will begin to adhere to newly established policies and work with governing bodies on responsible and ethical AI usage, including proper training for the relevant teams and business functions, as well as creating an environment with an ethos of data-driven, responsible decision-making.

Going a step further, law enforcement organizations will continue to ensure AI systems are vetted to be bias-free and corrected as needed, and we'll begin to see more lines of communication open up with the public to promote transparency regarding the use of these tools.

Ethical Data Privacy Accelerating Justice 

Another hot-button topic is the use of AI for predictive policing techniques, concepts we've seen in films like Minority Report that are now coming to fruition. While widespread adoption is still in its infancy, AI and machine learning will continue to be used by police departments to identify crime hotspots, enabling them to deploy officers to these locations preemptively.

These determinations are completely data-driven, and factors like race, religion, or sex do not play a role. In fact, these tools can predict who is more likely to be a victim of a particular crime and departments can act accordingly to protect them.
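
As a simplified example of what a purely data-driven hotspot determination can look like, the sketch below clusters historical incident coordinates with DBSCAN and reports cluster centers, using nothing but the locations of past incidents. The synthetic coordinates, radius, and thresholds are assumptions for illustration, not any department's actual model.

```python
# Illustrative hotspot detection: cluster past incident locations with DBSCAN.
# Only coordinates of prior incidents are used; no demographic attributes.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical incident coordinates (latitude, longitude) from a case system.
incidents = np.array([
    [40.7128, -74.0060], [40.7130, -74.0058], [40.7127, -74.0062],  # cluster 1
    [40.7306, -73.9866], [40.7304, -73.9869], [40.7308, -73.9864],  # cluster 2
    [40.7500, -73.9700],                                            # isolated
])

EARTH_RADIUS_KM = 6371.0
radius_km = 0.2  # treat incidents within ~200 m of each other as related

# DBSCAN with the haversine metric expects coordinates in radians.
clustering = DBSCAN(
    eps=radius_km / EARTH_RADIUS_KM,
    min_samples=3,
    metric="haversine",
).fit(np.radians(incidents))

for label in sorted(set(clustering.labels_)):
    if label == -1:
        continue  # noise: isolated incidents, not a hotspot
    members = incidents[clustering.labels_ == label]
    center = members.mean(axis=0)
    print(f"Hotspot {label}: {len(members)} incidents near "
          f"({center[0]:.4f}, {center[1]:.4f})")
```

The output lists two hotspots and ignores the isolated incident, which is the kind of result a department might use to decide where to schedule additional patrols.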

It's worth restating that law enforcement agencies aren't using AI tools to invade our privacy or for any other malicious purpose. The sole intent of these tools is to create safer communities and reduce the time it takes to solve crimes using Digital Intelligence.

AI has certainly become a growing part of our lives; not only do we have our own voice-enabled assistants in our pockets, but the technology is also becoming a crucial facet for many industries, including law enforcement. Looking ahead to 2021 and the years to come, it's clear that AI and investigative analytics will continue to play a key role in helping police departments keep our communities safe.

Police will work with the community to maintain an open line of dialogue to ensure citizens know that these tools aren't malicious, invasive or bias-prone, and once the stigma around the tech starts to dissipate, the public will begin to embrace just how helpful and necessary these tools are.

If you're interested in learning more about the future of AI and policing, I suggest reading this recently released white paper from IDC.

About the Author:

Heather Mahalik is the Senior Director of Digital Intelligence at Cellebrite. She advises on strategic digital intelligence operations and educates both the public and industry professionals on the latest challenges in the space and how Cellebrite helps address them. For more than 18 years, Heather has worked on high-stress and high-profile cases, investigating everything from child exploitation to Osama Bin Laden's digital media. She has helped law enforcement, eDiscovery firms, and the federal government extract and manually decode artifacts used in solving investigations around the world. Heather is the co-author of Practical Mobile Forensics, currently a best seller from Packt Publishing, and serves as a senior instructor, author, and course lead for FOR585: Smartphone Forensic Analysis In-Depth at the SANS Institute.
