Axon is committed to leveraging AI innovation ethically to revolutionize community safety while prioritizing the mitigation of biases and risks. This approach ensures that AI serves as a force multiplier, enhancing the capabilities of public safety professionals while preserving the critical role of human decision-making. Simply put, AI should augment operations, never replace the human.
In this article, we will explore several key considerations for law enforcement agencies that want to leverage AI in policing responsibly. Let’s get to it.
Responsible AI Development: A Methodical Approach
As a leader in the public safety space, Axon has carefully developed a methodology that gives equal weight to ethics and efficacy in applying artificial intelligence. Axon’s Ethics-by-Design methodology guides the responsible development of AI solutions:
Problem-Centric Approach: We always start with a problem that needs solving when we invest resources into developing AI technology, and we prioritize non-AI solutions when those prove effective.
External Collaboration: Engaging diverse external experts ensures fair and equitable AI development.
Risk Assessment and Mitigation: Risks are identified and mitigated throughout the design process, prioritizing user safety and fairness.
Diverse Data Training and Evaluation: AI models are trained and evaluated with diverse data to ensure performance across various scenarios and demographics.
Human Oversight: Safeguards are built into AI technology, ensuring human decision-making in critical moments.
Continuous Improvement: Rigorous testing and ongoing monitoring post-release ensure fairness and accuracy.
Learn more about our responsible approach to AI development.
Civilian vs Public Safety AI Tools
We design our AI solutions with public safety in mind, building secure, efficient, and purpose-built tools.
Draft One is a great example of how a public-safety-grade solution compares with a consumer-grade one.
Key distinctions between our solutions and consumer-level generative AI include:
Data Security: Your data is completely secure within the Axon network. When we need to test with real customer data, we request permission to enroll those customers in our voluntary, privacy-centric program — all while working within the confines of our data sharing agreement.
Hallucination Reduction: The underlying model we use to draft reports is OpenAI’s GPT-4 Turbo, and we calibrated it to prevent speculation or embellishment. Draft One reports stick to the facts and require officers to review the narrative and add any missing information.
Safeguards: Draft One requires users to fill in missing information. An officer cannot copy and paste the text until they have reviewed and updated every insert statement. Once all insert statements are updated, officers must sign off on the narrative’s accuracy and attest to their ownership of the report before submission.
Tailored for Law Enforcement: Draft One is a closed system with industry-specific safeguards, trained to write great police reports using the best resources available.
Integrated with the Axon Ecosystem: As soon as an officer stops recording with their Axon Body 3 or Axon Body 4 camera, the audio from the recording begins uploading to the cloud via LTE.
Learn more about the differences between Draft One and consumer-grade generative AI tools.
Current Applications of AI in Policing
AI is already integral to various aspects of policing. For instance, in the following video, Lafayette PD shares how solutions like Draft One save officers hours and alleviate burnout: