Law enforcement agencies, frequently at the forefront of adopting cutting-edge technology, have embraced artificial intelligence as the latest revolutionary breakthrough. Having already integrated AI-powered audio transcription tools, departments are now experimenting with a more ambitious application: software that can produce thorough police reports.
Axon, the company known for its Tasers and body cameras, has unveiled Draft One, a generative AI tool meant to expedite the laborious process of report writing. Built on Microsoft’s Azure OpenAI platform, Draft One analyzes body camera footage and can generate draft narratives in a matter of minutes. Axon says this can shave 30 to 45 minutes off an officer’s workday; according to The Associated Press, the software promises to cut paperwork by up to an hour a day, freeing officers to focus on mental health and community involvement.
Although the technology may offer real advantages, questions have been raised about its reliability and potential biases. Large language models, the kind of model behind both ChatGPT and Draft One, have drawn criticism for their propensity to produce false or misleading output. Even though Axon says it has made careful adjustments to address these problems, errors and hallucinations remain possible.
There are also valid concerns about gender and racial bias in AI-generated reports. Experts have documented such biases in large language models, and deploying them in law enforcement could worsen existing inequities.
Departments use Draft One in different ways: some agencies restrict it to minor incidents, while others give officers broader access. Experts contend, however, that relying on AI alone is not viable, given the potentially disastrous consequences of errors in police reports.
“The large language models underpinning tools like ChatGPT are not designed to generate truth. Rather, they string together plausible sounding sentences based on prediction algorithms,” Lindsay Weinberg, a clinical associate professor at Purdue University who focuses on digital and technological ethics, told Popular Science.
Weinberg, who directs Purdue’s Tech Justice Lab, argues that “almost every algorithmic tool you can think of has been shown time and again to reproduce and amplify existing forms of racial injustice.” Experts have documented many instances of gender and racial bias in large language models over the years.
Weinberg concluded that “the use of tools that make it ‘easier’ to generate police reports in the context of a legal system that currently supports and sanctions the mass incarceration of marginalized populations should be deeply concerning to those who care about privacy, civil rights, and justice.”
The growing use of AI in law enforcement demands a careful weighing of its advantages and disadvantages. Technology can increase productivity and streamline procedures, but it must be deployed carefully to avoid sacrificing accuracy, accountability, or justice.