Conversations About AI: Policing, Privacy, and the Future We're Ignoring
A colleague asked me how I imagine policing in five years and how we will be able to use AI. I gave him an honest answer:
Anything is possible.
Automated workflows, investigative support, data analysis, solving cold cases (because AI can spot the one small detail no investigator noticed before). All of that is already within reach, and my team is working towards “the new era of criminal investigations.”
But we are talking about a small unit with a very limited budget.
So if you ask me whether our federal state is developing in the right direction: maybe… But far too slowly to keep up with the rapidly growing volume of cybercrime. This may sound like strong criticism, but it reflects my current view of reality.
For large-scale use across the state, we simply don’t invest enough money in modern AI models that could help us close the gap with today’s criminals.
We need a fundamental shift in thinking here. Otherwise, investigations will continue to struggle with massive data volumes and a lack of automation. Even now, many investigators are overwhelmed by the data they receive from providers because they cannot read or interpret it efficiently. AI can help not only with reading that data, but also with interpreting it correctly. And of course, all of this must be fully GDPR-compliant and run locally.
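To make the “run locally” point concrete, here is a minimal sketch of what such a workflow could look like: a chunk of raw provider data is sent to a model hosted on the investigator’s own machine, so nothing leaves the local network. The endpoint, the model name, and the file name are assumptions for illustration, not a description of any existing system.

```python
# Minimal sketch: summarize a raw provider export with a locally hosted model,
# so the data never leaves the investigator's machine.
# Assumes a local server (e.g. Ollama) listening on localhost:11434 and a
# hypothetical model name "llama3"; adapt both to your own setup.
import requests


def summarize_locally(provider_record: str) -> str:
    """Ask the local model to explain a raw provider record in plain language."""
    prompt = (
        "Summarize the following provider data for an investigator. "
        "List the timestamps, accounts and IP addresses you find:\n\n"
        + provider_record
    )
    response = requests.post(
        "http://localhost:11434/api/generate",  # assumed local endpoint
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]


if __name__ == "__main__":
    # "provider_export.txt" is a placeholder for whatever raw data a provider delivers.
    with open("provider_export.txt", encoding="utf-8") as f:
        print(summarize_locally(f.read()))
```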
If you’re working in a similar field, I’d be interested to hear how your teams are handling these challenges.
The second situation involved a family member who wanted to use ChatGPT for the first time. We set everything up, I showed him a few things, and then he asked: “So where are the data stored?” - A very good question!
I told him that the data are processed by OpenAI, the provider of ChatGPT. They have access to the data you use within ChatGPT (and maybe even more, who knows?). His reaction was:
“Well, it doesn’t matter. They [meaning companies like Apple, Google, Amazon, etc.] already know everything about me anyway.”
I paused for a moment and kept my thoughts to myself. The same person would, of course, anonymize data from a professional context, yet doesn’t care about his personal data?
This shows that many people still don’t understand how important it is to protect their own data, or to educate themselves about the topic.
To all the people who already see the enormous positive and negative potential of AI: please educate yourselves, and inform your friends, family, and colleagues when there is a right moment to do so.
Many of my contacts still dismiss AI as just another trend. But many jobs will not be secure in a few years, and some will disappear entirely due to AI and robotics. The last time we saw a shift on that scale was during industrialization.
This needs a broader discussion in society. What happens to the people who currently work in those roles? How will they be supported if they lose their position to a robot or an advanced LLM? What happens if millions lose their jobs within a few years?
Customer service can already be handled live by AI voice agents built on language models and text-to-speech (TTS). Only one human is needed: the IT person who sets everything up.
The same applies to fraud attempts, which can now run automatically on a massive scale. How do we protect ourselves? Do we need AI to defend against AI? Should everyone have their own assistant that accompanies them through daily life, that has access to all their private information, and warns them of possible risks? And if yes, who controls this personal AI assistant?
We are facing a major societal shift, and far too many people still don’t take it seriously.