With non-invasive solutions, legal precedents and global trend signals, authorities and organizations already have a clear and safe path to apply Artificial Intelligence in crime prevention, forensic investigations and corporate governance.
Corruption, environmental damage, money laundering, fraud, extortion and other already well-known and well-typified criminal modalities gain scale with cyber-attacks, the on-demand formation of gangs in the virtual environment, and new configurations of the workforce and value chains. Although the essence of risk and delinquency is similar, the magnitudes completely change the way to monitor and investigate. The cinematic cliché of investigators, amid boxes of Chinese food, poring over tomes of archives or exhaustively watching a video, is already as anachronistic as an astronomer mapping stars with a spyglass. Big data, analytics, machine learning and other modalities of Artificial Intelligence are redefining the way science is done. In the case of forensic science, the speed of discoveries can determine the well-being of a community, the sustainability of an organization, the preservation of an ecosystem or the lives of people.
Artificial Intelligence modalities and applications are in everything – city automation, self-driving cars, conversational interactions and a multitude of use cases. Of course, each of these areas has its objectives, challenges and uncertainties. In the areas of public safety, surveillance, governance, crime prevention and forensic investigation, however, work is urgent. And there is already a lot to be done with the technologies and references of global best practices available, without entering the gray areas of ethical limits and regulations that are yet to be established.
More comprehensive, correct and auditable investigation
While emphasizing the guarantee of fundamental rights, the OECD’s AI Policy Observatory recognizes the role of technology in accounting for complex and time-sensitive investigations.
Even without delving into the technical details, it is worth seeing how AI modalities are applied in the fight against crime and what the points of attention are for law enforcement authorities, intelligence agencies and auditors.
The branch known as Big Data allows capturing data from different sources and extracting correlations and insights from it. Along with this gain in scale and speed, AI also brings the ability to analyze images, translate in real time and, in short, transform different types of content into relevant information. In terms of technological infrastructure, tools and services, these capabilities are already relatively mature. The challenge is where to start, with less risk and greater return.
Datasets, looking where threats and clues hide and proliferate – defining the set of data sources and creating the query mechanisms is complicated in itself. In the fight against crime, in addition to the exposed information (such as external signs of enrichment, for example), it is necessary to look at what was done in order not to be seen, such as cells of criminal organizations on the dark web.
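To make the idea of cross-referencing data sources concrete, here is a minimal, hypothetical sketch of surfacing "external signs of enrichment" leads by comparing two illustrative sources. All names, figures and the threshold are invented for illustration; a real system would work over far larger, messier datasets.

```python
# Hypothetical sketch: cross-referencing two data sources to surface
# "external signs of enrichment" leads. All names, values and the
# ratio threshold are illustrative, not a real investigative model.

declared_income = {          # e.g. from tax filings
    "alice": 45_000,
    "bob": 60_000,
    "carol": 38_000,
}

observed_assets = {          # e.g. from public vehicle/property registries
    "alice": 52_000,
    "bob": 310_000,
    "carol": 30_000,
}

def flag_discrepancies(income, assets, ratio=3.0):
    """Return subjects whose observed assets exceed `ratio` times declared income."""
    leads = []
    for name, declared in income.items():
        if assets.get(name, 0) > ratio * declared:
            leads.append(name)
    return leads

print(flag_discrepancies(declared_income, observed_assets))  # ['bob']
```

A flag like this is only a lead for human analysts, not evidence; the point is that the correlation across sources, not any single record, is what creates the insight.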
There are certainly cases where the police authority or governance managers need heavier investigative and legal instruments. But journalists, detectives and other experienced investigators get most of their results from the evidence that "is there" for those who know how to find and interpret it.
Customization, not personalization – as in the definition of datasets, the AI structure, in the cases highlighted in this article, must be flexible enough to meet different priorities and criteria for handling information. On the other hand, merely providing big data tools, ML and other resources may be insufficient. In addition to facilities that streamline projects (automated integrations, user-friendly interfaces and cloud infrastructure), it is worth taking advantage of the best practices a provider of applied AI solutions can bring. More than a technical simplification, SaaS (software as a service) can be a great facilitator to go where you can and not go where you shouldn't.
Explainability, transparency and AI accountability – traceability of processes with AI intervention is a common principle of early regulatory initiatives, but the subject is very broad. The forensic examination of a self-driving car accident, the justification of a credit denial or a criminal investigation, for example, have very different demonstration requirements.
The way processes are exposed varies, both by the nature of the activity and by the decisions of sectoral and regulatory forums. In some research communities, for example, findings are only recognized with documentation of the data, the algorithms and the code itself. In other cases, such as market intelligence, the use of proprietary algorithms should be allowed, provided that the data-treatment criteria of privacy and personal data protection legislation are respected.
Of course, there are situations where the secrecy of the AI engine is critical. In the case of suspected criminal actions, the monitoring itself has to be conducted discreetly, so as not to abort the investigation. However, even when operating with proprietary algorithms, the ability to demonstrate the processes of capturing and processing data is essential for the quality of evidence and the best outcome.
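One way to demonstrate capture and processing steps without exposing a proprietary algorithm is a tamper-evident processing log. The following is a minimal sketch under that assumption; the class, field names and step descriptions are all illustrative, not part of any real product.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical sketch of a tamper-evident processing log: each step records
# what was done and chains a SHA-256 hash over the previous entry, so the
# capture-and-processing pipeline can be demonstrated without revealing the
# proprietary algorithm itself. All names are illustrative.

class EvidenceLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, step, detail):
        entry = {
            "step": step,
            "detail": detail,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self):
        """Recompute the chain; any altered entry breaks the links."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = EvidenceLog()
log.record("capture", "ingested 1,240 records from open-source feed")
log.record("transform", "entity resolution pass (model version 2.3)")
print(log.verify())  # True
```

The chain proves the sequence of steps was not silently edited after the fact, which supports the quality of evidence even when the algorithms between the steps remain confidential.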
Non-invasive architecture is the best vaccine against compliance risks
Regulators and the legal and IT communities still have a fresh memory of the big data transformation – ingesting data had become so cheap and easy that legislation protecting personal information emerged, such as the European Union's GDPR (General Data Protection Regulation), which typified abusive models and made them unfeasible.
The impacts of AI attracted the attention of public agents, industries and society much more quickly. With a very short discussion time compared to privacy regulations (which took more than 10 years between the first proposals and effective national legislation), legal frameworks and sectoral arrangements are already beginning to be defined.
Regulatory initiatives are incipient and still focused on general issues. A reference that, although generic, serves as a good strategic guideline is the set of five AI Principles listed by the OECD.
Despite the sense of priority of the agents involved, there is much to discuss regarding the ethical, legal and regulatory limits of AI. It is not even certain yet whether the path will be general laws, sector regulations or a combination of both.
While all this is discussed, the task of defending people, goods, reputations, the environment and everything else targeted by crime cannot wait. There is a long way to go. But there is already a clear and safe notion of what can and should be done for a smooth journey.
Author: Vanderlei Campos for Voyager Labs