Can artificial intelligence really predict crime?
Recent advances in artificial intelligence have sparked debate about its potential use in predicting future criminal behavior. As AI technologies grow in scope and precision, a critical question emerges: can machines anticipate human decisions—especially those related to crime—before they happen?
The historical roots of biometric profiling
The idea of predicting crime based on physical features is not new. In the 19th century, criminologist Cesare Lombroso suggested that certain facial traits could indicate a predisposition to criminality. Though long discredited by science, this thinking resurfaces through AI systems trained on biometric data, such as facial recognition and body analysis. Some developers believe these tools can detect subtle behavioral cues, like expressions, gestures, or posture, that might suggest risk. However, using biometrics in this context raises ethical concerns, especially when tied to law enforcement or surveillance practices.
The complexity of human behavior
Human actions are influenced by a shifting combination of emotional, social, and environmental factors. Predicting crime is not like forecasting the weather: weather follows physical laws and is measured continuously, whereas the circumstances behind a criminal act are sparse, context-dependent, and often unrecorded. While AI excels in structured environments with clear data patterns, criminal behavior resists such classification.
Attempting to predict an individual’s future based on their biometric profile risks reducing people to data points. It can also reinforce bias, especially if algorithms are trained on flawed or incomplete datasets. This raises questions about fairness, accuracy, and accountability.
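The point about flawed datasets can be made concrete with a toy sketch (all figures and group labels below are hypothetical, chosen only for illustration): if one group is observed and recorded more often than another, even a simple counting model will assign it a higher "risk" score that reflects the recording practice, not behavior.

```python
# Minimal sketch with invented data: how a skewed training set shapes
# a naive risk model. Group "A" is over-policed, so offenses there are
# recorded more often; the model learns that artifact, not behavior.
from collections import defaultdict

# Hypothetical records of (group, recorded_offense). Underlying behavior
# is assumed identical, but group "A" offenses are logged twice as often.
records = (
    [("A", 1)] * 40 + [("A", 0)] * 60 +
    [("B", 1)] * 20 + [("B", 0)] * 80
)

def train_rate_model(data):
    """Estimate P(recorded offense | group) by simple counting."""
    counts = defaultdict(lambda: [0, 0])  # group -> [offenses, total]
    for group, offense in data:
        counts[group][0] += offense
        counts[group][1] += 1
    return {g: offenses / total for g, (offenses, total) in counts.items()}

model = train_rate_model(records)
print(model)  # {'A': 0.4, 'B': 0.2}
```

The model scores group A as twice as "risky" purely because of how the data were collected; fed back into policing decisions, such a score can generate more recorded offenses for A and harden the bias into a feedback loop.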
Ethical and technical boundaries
Even with advanced learning models, AI remains a tool shaped by the information it is given. When that information includes biometric characteristics, care must be taken to ensure it is not misused. Relying too heavily on these systems for decision-making could undermine fundamental legal principles, such as the presumption of innocence.

Instead of trying to foresee criminal intent, there may be more value in using these technologies to improve public safety through better resource allocation or response planning, without targeting individuals based on predictive scores.

Developing guidelines around the responsible use of AI and biometrics is essential. These discussions must include experts from law, science, and civil society to avoid repeating historical mistakes with new digital tools.
Source: https://www.biometricupdate.com/202504/can-ai-predict-who-will-commit-crime