Neuroprophet in the service of security: predicting criminals with AI

Published: 2024-10-27
Author: Artem Rogozhin

The city drowns in neon lights, like the pages of a dystopian novel where every human step has long been calculated by the system. This isn’t the opening of “Snowden” or the plot of a new spy saga; it’s the future we will all have to live in.

In 2024, scientists in South Korea unveiled a system called Dejaview, an artificial intelligence designed to recognize criminal intent before it turns into action. The neural network was trained on 33,000 recordings collected from surveillance cameras over recent years; time of day, location, and past incidents were carefully analysed so that the network could flag potential threats.
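The researchers have not published Dejaview’s internals, so any concrete illustration has to be guessed at. The sketch below is only a toy rendering of the idea described here, folding time of day, location history, and loitering into a single risk score; every feature name and weight in it is an assumption, not a detail from the actual system.

```python
# Purely illustrative sketch: Dejaview's real architecture and features are
# not public, so the cues and weights below are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class Observation:
    hour: int             # time of day, 0-23
    district_risk: float  # historical incident rate for this location, 0..1
    loitering_sec: int    # how long the person has lingered in one spot
    prior_incidents: int  # incidents previously recorded at this camera

def risk_score(obs: Observation) -> float:
    """Fold the cues named in the article (time, place, history) into one 0..1 score."""
    night = 1.0 if obs.hour >= 22 or obs.hour <= 5 else 0.0
    loiter = min(obs.loitering_sec / 600.0, 1.0)     # cap at 10 minutes
    history = min(obs.prior_incidents / 10.0, 1.0)
    return round(0.3 * night + 0.3 * obs.district_risk + 0.2 * loiter + 0.2 * history, 2)

if __name__ == "__main__":
    late_night_walk = Observation(hour=2, district_risk=0.7, loitering_sec=480, prior_incidents=3)
    print("risk score:", risk_score(late_night_walk))  # 0.73 on this toy scale
```

A real system would learn such weights from the 33,000 recordings rather than hard-coding them, but the shape of the decision is the same: a handful of contextual signals collapsed into one number.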

Neuroprophet Dejaview studies people’s behaviour and finds “déjà vu,” those very behavioural patterns that have already led to crimes. Dejaview watches a person like an old paranoiac who’s read too many detective novels: the habitual actions of repeat offenders serve as a foundation for the system to predict when the next violator will follow a familiar script.
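Stripped of the metaphors, “déjà vu” here means similarity search: describe what a person is doing now as a vector of behavioural cues and compare it with vectors extracted from footage that preceded past crimes. The comparison below is a hedged sketch of that idea; the four cues and the 0.9 threshold are invented for illustration.

```python
# Toy sketch of "déjà vu" matching: compare a current behaviour vector
# against patterns that preceded past incidents. The cues and the 0.9
# threshold are invented for illustration, not taken from Dejaview.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Each vector: [night-time, loitering, repeated head turns, restricted area]
past_incident_patterns = [
    [1.0, 0.8, 0.9, 0.7],
    [0.2, 0.9, 0.6, 0.9],
]
current_behaviour = [1.0, 0.7, 0.8, 0.6]

best_match = max(cosine(current_behaviour, p) for p in past_incident_patterns)
print(f"closest past pattern: {best_match:.2f}")
if best_match > 0.9:
    print("déjà vu: behaviour resembles a pattern that preceded a crime")
```

When the best match crosses the threshold, the system treats the present as a rerun of a past incident, which is exactly the “familiar script” described above.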

All of this resembles the classic film Minority Report with Tom Cruise, directed by Steven Spielberg, where the “Precrime” police arrested criminals before they committed a crime, using the paranormal abilities of “Precogs.” Only in the case of the neuroprophet, there’s no mysticism — just data and code.

“We only arrest those who haven’t broken the law but will. After all, preventing a crime doesn’t change the fact that it was going to be committed,” says a character in Minority Report. Well…

The new intelligent prophet, Dejaview, was trained for a long time. Machines watched thousands of hours of video, analysing every human movement down to the smallest detail: they tracked those who lingered at corners, those who looked around more often than usual, and those who ventured at night into areas where law-abiding citizens rarely go. Step by step, Dejaview learned to be not just an observer but an arbiter of fate.

“…The crime coefficient is over 899 points. Enforcement mode is Lethal Eliminator”
Psycho-Pass

“…Big Brother is watching you”
George Orwell, 1984

In the popular anime Psycho-Pass, machines assessed a person’s mental state and determined who could become a criminal. If the device built into an officer’s gun rated someone’s crime coefficient above 300 points on the basis of that assessment, the person was simply eliminated.
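At bottom, the anime’s mechanic is a threshold function: one scalar mapped onto an enforcement mode. Here is a toy rendering of that mapping, with thresholds as the series depicts them; it has nothing to do with how the real Dejaview scores anyone.

```python
# Toy rendering of the Psycho-Pass "Dominator" logic: a single crime
# coefficient mapped to an enforcement mode. Thresholds follow the anime,
# not anything the real Dejaview system is known to use.
def enforcement_mode(crime_coefficient: float) -> str:
    if crime_coefficient < 100:
        return "Not a target"          # trigger stays locked
    if crime_coefficient < 300:
        return "Non-Lethal Paralyzer"  # stun and detain
    return "Lethal Eliminator"         # the mode quoted above for 899 points

for value in (42, 180, 899):
    print(value, "->", enforcement_mode(value))
```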

Dejaview is the embodiment of that system, but instead of a crime coefficient, it relies on algorithms and endless arrays of data.

On one hand, a neuroprophet capable of preventing crimes is an incredible technological breakthrough: the likelihood of crime falls, and society becomes safer. But behind this achievement lies a grim truth: such safety has become synonymous with a new danger. People start to fear moving freely, because any deviation from the norm can place them under suspicion; the neuroprophet that protects us from criminals can ultimately turn anyone into a suspect. This is the fear that hangs over the progress of the 21st century: the fear that ordinary actions will be read as dangerous by an artificial superintelligence.

In George Orwell’s 1984, every individual was under surveillance, and any thought could become grounds for arrest. It is quite possible that in the era of Dejaview we will live in a world of “Big Brother,” just like in Orwell’s dystopia, where every step we take is monitored. Only now it will not be people watching through the cameras, but neural networks.

“Not committing a crime” does not mean “not guilty”

At first, the arrival of artificial intelligence brought widespread joy. Neural networks took on routine tasks, leaving people more time for creativity and personal growth. But gradually we began to notice that while machines were solving our problems, they were taking something far more valuable: our freedom.

In Asia, artificial intelligence is often portrayed as the savior of humanity: in anime and films, it brings prosperity and protects the weak. In American culture, AI is more commonly seen as a threat: in movies like Terminator and The Matrix, machines annihilate humanity.

When those films were written, people feared being ambushed by bandits around every corner the moment they stepped outside. The new neuroprophet paints entirely different scenarios: a person walks down the street, scanning shop windows and signs for the life-saving word “Coffee.” A routine scene, but not for Dejaview. The artificial intelligence behaves like a security guard at the door, watching for the moment a person’s actions become atypical.

The neuroprophet never makes mistakes

Artificial intelligence has prevented numerous crimes, allowing people to walk peacefully at night without fearing for their lives. But with the arrival of the new neuroprophet, people have begun to fear for their freedom, as we now live in a world where every step is recorded and every movement can raise suspicion.

Neural networks have given us protection, but how will the neuroprophet’s predictions be regulated at the legal level? Who will be held accountable for a false prediction, and how long will data about a specific person be retained?

These are questions we all still need answers to.

“…The real problem is not whether machines think but whether men do.”
Burrhus Frederic Skinner

Now, hiding online and leaving a few scathing comments is becoming increasingly difficult. Elon Musk has introduced the world to his intelligent system for tracking internet trolls: the AI Sherlock identifies trolls even before they have a chance to post on X (formerly Twitter), calculating with a stated 40% probability who is about to unleash another wave of negativity. It’s like showing up at a party and finding security already waiting at the entrance because they “know” you’re going to ruin everyone’s fun.
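That 40% figure deserves a pause: at that confidence level, acting on every flag means being wrong about the flagged person more often than being right. Below is a toy sketch of what such threshold-based flagging looks like; the user names and scores are invented, and only the 40% cut-off comes from the text.

```python
# Toy sketch of threshold-based flagging. The 40% cut-off comes from the
# article; the user names and scores below are invented for illustration.
TROLL_THRESHOLD = 0.40

predicted = {        # hypothetical model output: probability of trolling
    "user_a": 0.91,
    "user_b": 0.42,
    "user_c": 0.12,
}

flagged = [name for name, p in predicted.items() if p >= TROLL_THRESHOLD]
print("flagged before posting:", flagged)  # ['user_a', 'user_b']
# At p = 0.42, roughly 58 of every 100 such flags would be false alarms,
# which is exactly the kind of misplaced suspicion the article worries about.
```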

However, we cannot ignore the positive aspects of artificial intelligence: it helps solve complex problems and makes people’s lives simpler and more convenient. Doctors use it to diagnose diseases in their early stages, scientists find new treatment methods, and engineers develop smart cities.

Artificial intelligence has indeed improved our lives in many ways, and its impact is hard to overestimate. But we must remember that with every improvement, we become increasingly dependent on the systems that surround us.

The line between protection and paranoia

Everyone has dreamed of a utopia, a world watched over by a neuroprophet made of code and data. But haven’t we sold our freedom for a mythical safety? Does each of us have sufficient grounds to trust the predictions of neural systems?

The city drowns in neon lights, like the pages of a dystopian novel where every human step has long been calculated by the system. Now each of us is merely a point in an infinite stream of information, and the machine observes who we are and who we may become.


Thank you!

