
The AI Gap Between Criminals and Law Enforcement: One Innovates, One Waits

Software development has never been easier than it is today, thanks to AI. New LLMs let us build customized solutions faster than ever before. But what are the possibilities for law enforcement agencies, and how can they leverage AI to create specialized, efficient tools?

LLMs are already capable of producing substantial amounts of code. Major companies like Google, Facebook, Anthropic, and OpenAI use AI models to develop their products further. Mark Zuckerberg predicted in an interview that

within the next 12 to 18 months, about 50% of coding will be done by AI.

In some of Microsoft’s projects, 20–30% of code is already generated by AI. For these companies, it’s a competition for the most powerful AI models.

And What About Criminals and Law Enforcement?

More and more criminals are using AI in their activities: fraud with live voice generation, CEO fraud with deepfakes, automated cyber-attacks, SaaS offerings distributed on darknet marketplaces, and more.

On the other hand, law enforcement agencies still rely largely on traditional approaches. When they realize they need new software (which could take an eternity given the bureaucratic circumstances), they mostly hire external companies to build it for large sums of money. The result is often a system that still fails to cover the entire workflow, is overly complex, or is not at all user-friendly. (But it’s GDPR compliant! 🤓)

AI offers a bridge: It can help create reliable software components that support workflows existing systems simply cannot handle.

What Could Be (or Already Is) a Solution?

The best part is that open-source and open-weight models can run on local hardware, reducing the risk of data leakage. The quality of recent open-source models is more than acceptable for most use cases. (This is the last time I'll mention it, promise: of course, when dealing with sensitive information, GDPR compliance is essential.)

Within the EU, one company has become difficult to ignore: Mistral AI, with its open-weight model ecosystem. In addition to their large models (which usually require hardware budgets only a few public institutions could justify), they recently introduced their Ministral 3 models and, two days ago, their Devstral 2 model.

I conducted several performance tests, and I am impressed by Ministral 3's vision capabilities (even with the smallest 3B model in a Q4_K_M-quantized version). Ministral 3 has already proven useful in a new software solution I'm currently building, as it delivers solid performance and high token throughput even on modest hardware configurations.
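To make the local-deployment idea concrete, here is a minimal sketch of how such a quantized vision model could be queried through a locally hosted, Ollama-style HTTP API. The model tag (`ministral-3b-q4`) and the endpoint are assumptions for illustration, not my actual setup; the helper only builds the request payload, so it runs without any server:

```python
import base64


def build_vision_request(model: str, prompt: str, image_path: str) -> dict:
    """Build an Ollama-style /api/chat payload with one attached image.

    The model tag is a placeholder -- adapt it to your local deployment.
    No data leaves the machine: the request targets a local server only.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt, "images": [image_b64]},
        ],
        "stream": False,
    }


# Sending the request requires a running local server (e.g. Ollama):
# import requests
# payload = build_vision_request("ministral-3b-q4",
#                                "Describe this evidence photo.", "scene.jpg")
# reply = requests.post("http://localhost:11434/api/chat",
#                       json=payload, timeout=120).json()
# print(reply["message"]["content"])
```

Because everything stays on `localhost`, case material never leaves the machine, which is exactly the data-leakage argument for open-weight models above.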

What does this mean for law enforcement agencies in practice?

It means that agencies now have the opportunity to develop tailored digital tools in-house, without depending on multi-year procurement cycles or expensive external partners. Investigators, analysts, and officers could already work with lightweight, specialized AI models that assist with:

  • Document and media analysis → AI vision and transcription models can process case files, images, and videos within seconds (on secure local machines).
  • Automating repetitive tasks → From generating standardized reports to converting data formats, AI can take over time-consuming administrative work.
  • Search and analysis across multiple datasets → Instead of manually comparing information across various internal systems, AI can cross-reference and highlight relevant connections automatically.
  • Prototyping new workflow tools → Officers with basic technical understanding can now create small applications or workflow improvements themselves. AI could even generate a comprehensive instruction manual on how to install and use the new tool.
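The cross-referencing point above can be sketched in a few lines of plain Python. The record fields and identifiers below are invented for illustration; a real deployment would read from the agency's actual systems:

```python
from collections import defaultdict


def cross_reference(datasets: dict, key: str) -> dict:
    """Group records from several datasets by a shared identifier.

    Returns only identifiers appearing in more than one dataset --
    the 'relevant connections' an analyst would otherwise find by hand.
    """
    seen = defaultdict(list)  # identifier -> [(dataset name, record), ...]
    for name, records in datasets.items():
        for record in records:
            if key in record:
                seen[record[key]].append((name, record))
    # keep only identifiers that link at least two different datasets
    return {
        ident: hits
        for ident, hits in seen.items()
        if len({name for name, _ in hits}) > 1
    }


# Invented example records:
fraud_cases = [{"case": "F-101", "phone": "+49-170-555"}]
cybercrime_tips = [
    {"tip": "T-88", "phone": "+49-170-555"},
    {"tip": "T-90", "phone": "+49-151-000"},
]
links = cross_reference({"fraud": fraud_cases, "cyber": cybercrime_tips}, "phone")
# "+49-170-555" appears in both datasets and is surfaced as a connection
```

Even a simple helper like this replaces manual side-by-side comparison of exports from separate internal systems.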

Of course, rolling these tools out agency-wide (even after security audits, compliance checks, and careful configuration) remains difficult within the agency's bureaucracy.

Closing The Gap

A shift would allow law enforcement to become more agile, efficient, and technologically independent. For some use cases, AI might replace humans in the future. And within law enforcement, this could free up human resources to focus on what is most important: catching criminals.

Until we see the first working and approved RoboCops, we still depend on humans to catch suspects.

Over the next few years, agencies that invest some of their resources in AI and small-scale innovation will gain a massive advantage. I highly recommend every colleague share their innovations and tools with Europol and its Innovation Lab. They can help you distribute your software on the Europol Platform for Experts (EPE). Every new tool helps all members of law enforcement fight criminals worldwide.

(And because it would be approved by Europol, it could simplify adoption within your agency. 😉)

Don’t Give Up

I’m excited to continue exploring how these technologies can support our daily work. The progress we’ve seen in just the past few months is only the beginning. The potential for law enforcement is enormous.

But always remember that both sides benefit from the massive progress in technology.

For everyone who has read to the end of this article:

Mark Zuckerberg’s prediction was in May 2025. About a year from now, it’s possible that 50% of all code will be written by AI. See you in December 2026 for an update about his prediction (let’s see if he was right 😊).

Stay connected

#LawEnforcement #TechInnovation #MistralAI #SoftwareDevelopment #Europol #InnovationLab #EPE