AI’s New Criminal Toolkit — What Cops, Students, and Chiefs Need to Get Right Now
The UK’s AI Security Institute (AISI) just laid out a clean, no-drama picture of where AI-enabled crime is headed. The short version: the tools criminals need are already here, they’re getting better fast, and they lower the skill required to do serious harm. That should change how we teach, train, and police—now.

AISI highlights three capabilities that matter most on the street and in the classroom: (1) multimodal generation—convincing audio, video, and images that supercharge impersonation, sextortion, and social-engineering scams; (2) advanced planning and reasoning—models that help design and adapt attacks; and (3) AI agents—systems that can take actions on their own, enabling persistent, large-scale abuse without a human at the keyboard. If you’re still treating deepfakes and agentic automation as edge cases, you’re behind.
Equally important, these capabilities are moving onto consumer devices. As models are compressed and shipped inside everyday apps and phones, the barrier to entry drops and the attack surface expands. Translation for policing: expect more offenders using “ordinary” devices and off-the-shelf apps to run playbooks that used to require a crew.
On the research and policy side, AISI’s approach maps neatly to how a modern CJ program and a forward-leaning agency should respond. They’re modeling where AI gives criminals lift; running technical evaluations (including multimodal tests) to measure how much uplift models really provide; analyzing open-source usage data to spot misuse patterns and evasion; and red-teaming with subject-matter experts to surface attack paths we haven’t imagined yet. That’s the blueprint: study the problem, measure the risk, simulate the offender, then harden the system.
What that means for our world:
- Training: Scenario-based drills on AI-assisted impersonation, sextortion, and automated social engineering. Teach officers and analysts what “AI fingerprints” look like in communications and media. (Stop assuming humans wrote the scam.)
- Policy & evidence: Update SOPs for AI-tainted evidence—document model/version, prompts, and chain-of-custody for generated or enhanced media. Encourage early consultation with prosecutors on admissibility.
- Ops & intel: Stand up a small red-team cell (cross-functional with IT) to probe local systems and public-facing workflows the way offenders will. Tie this to a quick-reaction playbook for takedowns and victim notification.
- Academia-to-practice pipeline: Put students on curated, legal datasets to replicate AISI-style evaluations and misuse detection; feed those results back to partner agencies as plain-language briefs. Build courses that blend forensics, policy, and offender decision-making—not just tool demos.
- Community comms: Pre-bunking beats debunking. Push simple guidance to the public on verifying voices/faces and handling “urgent” money or data requests before the next wave hits.
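To make the evidence-handling point concrete, here is a minimal sketch of what documenting model/version, prompt, and chain-of-custody for a piece of AI-generated media could look like in practice. This is a hypothetical illustration, not an agency standard: the function name `log_ai_evidence` and field names like `model_version` are my assumptions, and any real SOP would be driven by prosecutors and records policy.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_ai_evidence(media_path: str, model_name: str, model_version: str,
                    prompt: str, handler: str) -> dict:
    """Record a chain-of-custody entry for AI-generated or AI-enhanced media.

    Hashes the file so later tampering is detectable, and captures the
    model/version and prompt alongside who logged it and when.
    """
    data = Path(media_path).read_bytes()
    return {
        "file": media_path,
        "sha256": hashlib.sha256(data).hexdigest(),  # fixes file contents at intake
        "model": model_name,
        "model_version": model_version,
        "prompt": prompt,
        "handler": handler,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Stand-in file for the example; a real case would point at seized media.
    Path("sample.png").write_bytes(b"fake image bytes")
    entry = log_ai_evidence("sample.png", "example-model", "1.0",
                            "prompt text unknown", "Analyst A")
    print(json.dumps(entry, indent=2))
```

The hash is the load-bearing part: recomputing it later and comparing against the logged value is a simple, court-explainable way to show the media was not altered after intake.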
Bottom line: AI doesn’t just speed up old crimes; it changes who can commit them, at what scale, and how quickly they can pivot. If we align police practice with rigorous, transparent measurement—like AISI is doing—we can stay ahead without hype. That’s the kind of bridge between the academy and the profession worth building.
Source: https://www.aisi.gov.uk/work/how-will-ai-enable-the-crimes-of-the-future
More of my work: https://carterfsmith.com