International Security Conference-West: AI and the Future of Critical Event Management

I had the privilege of attending and presenting at International Security Conference West (ISC-W) in Las Vegas, Nevada, last week. It was an incredible week connecting with colleagues, customers, partners and prospects. When we weren't commiserating about the sensory overload, we were talking about Artificial Intelligence (AI) and its security implications. The brilliant product team at OnSolve published a great blog this week summarizing the application of AI in critical event management (CEM). Pause here and take a minute to read the blog and check out the webinar by our AI product leadership.

Welcome back! Exciting stuff, right? AI is a force multiplier that supercharges the intelligence lifecycle: a superpower for the good guys. It's also alarming how easily AI can pollute information, introduce bias and drive unexpected actions: a superpower for the bad guys.

Here are my key takeaways from ISC-W and how practitioners should think about responsible adoption and implementation of AI within their teams and organizations.

Adoption: The Right AI Is Responsible AI

    • Make sure AI has access to the right data for the desired model application. Consider how the AI solution integrates with existing security infrastructure, including surveillance cameras, access control systems and incident management platforms. Ensure that data flows comply with relevant data privacy regulations (e.g., GDPR, HIPAA) and protections for personally identifiable information (PII), and implement safeguards to prevent unauthorized access or misuse of data.
    • Consider the dangers of asking AI to qualify or judge a situation or set of information. Instead, treat generative AI as single-source intelligence: verify information via multi-source research and ask AI to summarize facts rather than assess incidents on your behalf (a minimal sketch of this pattern follows this list).
    • Develop strong ethics and governance measures to guard against unintended consequences of AI applications, including bias or discrimination in the model output or in how our organizations use that output.
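
Here is a minimal sketch of the "summarize, don't assess" pattern from the list above. The call_llm helper is a hypothetical stand-in for whatever model endpoint your organization has approved, not a specific product API.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder; wire this to your sanctioned model client."""
    raise NotImplementedError

# The instructions pin the model to fact summarization only; judgment
# stays with the analyst.
SUMMARIZE_ONLY = (
    "Summarize only the facts stated in the text below. "
    "Do not rate severity, assign blame, or recommend actions."
)

def summarize_single_source(report_text: str) -> str:
    # Treat the output as one source among many, never the verdict.
    return call_llm(f"{SUMMARIZE_ONLY}\n\n---\n{report_text}")

def corroborate(reports: list[str]) -> list[str]:
    # Multi-source verification: one summary per source, compared side
    # by side by a human before anything is briefed upward.
    return [summarize_single_source(r) for r in reports]
```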

Implementation: A Superpower for All Teams, Big and Small

    • Leverage AI to "supercharge the intelligence lifecycle." I first heard this term from Meghan Gruppo at Google. We can find and evaluate new information or sources faster, process this information with context and analysis more robustly, and leverage AI to generate helpful outlines, creative scenarios and even charts, graphics and images for our intelligence products. Further, we can use AI to quickly summarize large data sets: a summary of the past 200 incidents, meeting transcripts, call logs and shift notes (see the sketch after this list).
    • Develop internal policies for use so that team members understand the guardrails and can access and utilize AI with peace of mind. This takes practice and requires a walled garden or virtual sandbox for experimenting with generative AI. Check out OnSolve's Principles for Using Generative AI for Intelligence & Security Teams.
    • Think strategically about the role of your team and how it may need to evolve with new AI capabilities and improved efficiency. Give careful consideration to junior team members who routinely summarize information, track incidents, log information or conduct foundational research, all activities that AI supplements or replaces altogether. What does career growth look like in this new age of AI?
    • All models are flawed, and some are more useful than others. Even a model with 99.9% accuracy gets one call in every thousand wrong, and an operation processing thousands of alerts a day will surface those errors daily. It's important that AI not be treated as gospel, especially when doing desk research. Validate with multiple sources, just as we would with any other single source of information. Adopt AI with change management practices and implement redundancies for high-impact incidents.
    • Most generative AI models available today draw information from the public domain, and user inputs may be absorbed into that public data pool. Don't input confidential information, IP or proprietary data into the models without first discussing the risks with legal, risk and information security partners (the sketch below includes a redaction step for exactly this reason).
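
As a concrete illustration of the "summarize large data sets" and "protect proprietary data" points above, here is a hedged sketch that batches the past 200 incidents into a single summarization prompt, with a redaction pass first. The field names and helpers are invented for illustration and are not from any specific OnSolve product.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    when: str      # ISO-8601 timestamp
    category: str  # e.g., "severe weather", "civil unrest"
    location: str
    notes: str

def redact(text: str, banned_terms: list[str]) -> str:
    # Strip confidential identifiers BEFORE anything reaches a public
    # model; real redaction rules need legal and infosec review.
    for term in banned_terms:
        text = text.replace(term, "[REDACTED]")
    return text

def build_batch_prompt(incidents: list[Incident], banned: list[str]) -> str:
    rows = [
        f"{i.when} | {i.category} | {i.location} | {redact(i.notes, banned)}"
        for i in incidents[-200:]  # the past 200 incidents
    ]
    return (
        "Summarize recurring patterns, categories and locations in these "
        "incident records. Report facts only; do not assess or predict.\n"
        + "\n".join(rows)
    )
```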

One of the most difficult mandates our risk management and security leaders face is being asked to provide ground truth to our respective leadership. Every day we get the question, "What's really happening here?" It is an important and taxing question. Without technology, we can't possibly process all the information and data in the world in near-real time to answer it accurately and comprehensively every time. We rely on teams of analysts or make assumptions based on experience... maybe sometimes we don't know and just have to say, "I don't know, but I'll find out."

AI is best utilized within a similar framework. Think about the value of information throughout the decision-making cycle. Help AI understand the rules of the road for your organization and give it your C-suite's critical information requirements (CCIRs). This way, AI can deliver the right information to the right people at the right time and in the right format (a minimal sketch of machine-readable CCIRs follows).
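
What might machine-readable CCIRs look like? Below is one possible shape, sketched in Python with invented categories, recipients and deadlines; your actual requirements, titles and delivery channels will differ.

```python
# Each CCIR maps an information category to who must hear about it and
# how quickly: the "right people, right time" rules of the road.
CCIRS = {
    "employee safety":     {"recipients": ["CSO"], "deadline_minutes": 15},
    "facility disruption": {"recipients": ["CSO", "COO"], "deadline_minutes": 60},
    "supply chain":        {"recipients": ["COO"], "deadline_minutes": 240},
}

def route(category: str, summary: str) -> list[str]:
    """Return the notifications a CCIR match would generate."""
    req = CCIRS.get(category)
    if req is None:
        return []  # not critical; hold for the daily rollup
    return [
        f"TO {person} (within {req['deadline_minutes']} min): {summary}"
        for person in req["recipients"]
    ]

print(route("employee safety", "Protest forming near the Austin office."))
```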

The mission can feel daunting and the path forward unclear. If you'd like to continue this discussion, share feedback or get assistance, OnSolve is here to help.

(Information cut-off date 1000 PT, April 17, 2024)

Nick Hill

Nick Hill is a Senior Analyst in Global Risk and Intelligence Services, where he drives intelligence analysis and services implementation to help customers mitigate dynamic risks and strengthen organizational resilience. Prior to his current role, Nick led product development and services implementation for a physical security provider that leveraged AI to improve critical incident management. Nick is a former security manager who oversaw travel risk management, risk intelligence and global security operations, and he previously served in the Marine Corps overseeing strategic intelligence analysis and production. For more real-time risk and resilience insights, follow Nick on LinkedIn.