ChatGPT and Risk Management: What’s the Word?

If you’ve watched the news or been online even for a few minutes in recent months, you’ve heard the hype surrounding the newest artificial intelligence (AI) on the scene: ChatGPT. While AI development has been growing steadily, the popularity of this bot has everyone wondering where the next impact will be and if AI can really edge out human performance.

Given ChatGPT’s ability to interact conversationally, as well as use AI to analyze historical data and current trends, security professionals may be wondering if ChatGPT can be used in risk management and critical communications. While AI plays an important role in critical event management, it's not the same type of AI used in ChatGPT. Security leaders and crisis management experts need to understand the differences so they can responsibly protect their people and operations.

What Can ChatGPT Do?

ChatGPT is built on a large language model, a type of neural network trained on vast collections of text to learn the patterns of language. When given a prompt, the model generates a response based on those learned patterns and the parameters of the request.

In layman’s terms, you enter a topic and/or ask a few questions, and ChatGPT scans its data and produces a new piece of writing. Given the volume of what’s available online, this technology has the potential to save a lot of time, both in gathering information and summarizing it. But can it be relied upon to detect risk or write critical communications during emergencies?
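
To make that interaction concrete, here is a minimal sketch of the prompt-and-response pattern behind tools like ChatGPT, written against the OpenAI Python SDK. The model name, system prompt and question are placeholders; the point holds regardless of the values you use.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "You help draft internal security updates."},
        {"role": "user", "content": "Summarize the flooding risk near our Atlanta warehouse this week."},
    ],
)

# The model returns fluent, plausible-sounding text. Nothing in this exchange
# verifies whether the claims in that text are accurate or current.
print(response.choices[0].message.content)
```

That gap between fluency and accuracy is exactly why the output still needs human review before it informs any decision.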

Where the Challenges Lie

Currently, ChatGPT is limited to data collected up to 2021. More concerning, ChatGPT may produce answers that sound plausible but are actually incorrect, because the technology can't fact-check or validate its sources. That makes it difficult for ChatGPT to assess and provide validated content on a crisis in real time.

For security professionals, ChatGPT poses a hazard shared by all new technology: the potential for intentional harm. Cybercriminals are already using its generative capabilities for phishing and malware. As the technology develops, risk managers will have to weigh the pros and cons of its use.

In another example, a university in Nashville, Tennessee, used ChatGPT to compose an email to the student body about a school shooting in Michigan. The email called for compassion and inclusivity, then concluded with a reference to ChatGPT. Many recipients were offended by the use of AI to compose a communication about such a sensitive matter, resulting in significant bad press for the university.

The lesson for risk managers? AI can serve a purpose in alerting leadership quickly. However, when it comes to keeping people safe and informed, it’s imperative to review the information and any alerts slated to go out to employees, residents and other stakeholders during a crisis. Exclusive reliance on AI like ChatGPT could put people at risk. It’s not reliable, it’s not authenticated, and it hasn’t been developed specifically for use in risk management.

What You Should Look for in an AI Vendor

In today’s complex threat landscape, a solid risk management strategy blends human discernment with AI developed specifically to detect and analyze risk. Both are vital to proactively mitigate threats and minimize damages.

So, what are the key features that distinguish AI for risk management from technology like ChatGPT?

Above all, AI for risk management delivers data that's specific and relevant. It looks for particular information and filters out extraneous data to identify events that could impact your people and operations. AI developed for risk management does this more effectively because:

    • It’s focused on the physical threat landscape, unlike a general-purpose tool such as ChatGPT. It’s essential to have accurate data on the dynamic risk landscape, not a single moment in time or a dataset that stops at 2021. That data can be combined with other factors, such as the locations of your assets, and updated in real time as events unfold.
    • It leverages established domain expertise in risk management, including AI and language processing (e.g., the ability to tell the difference between a chocolate bomb and an actual bomb, or the state of Georgia and the country of Georgia).
    • It’s based on solid research into models and technology tailored to specific industry needs. The technology can surface information relevant to your industry, whether that's healthcare, manufacturing or financial services.

Knowing these defining characteristics, how do you find a system that gives you all of them and a means of achieving better outcomes?

Why Actionable Intelligence Is the Answer

Actionable intelligence provides an accurate picture of events that have the potential to impact an organization’s people and operations. The ultimate goal is to help leaders make more informed decisions and take steps to limit threats and reduce damages.

ChatGPT operates exclusively on its AI algorithms. By contrast, actionable intelligence is achieved by combining human discernment with the data the AI obtains. Remember, the AI engine is only as effective as the data it's ingesting, so it's essential that you can trust where the data originates. To accomplish that, look for:

  • An AI vendor with a tested process: To ensure the data is relevant, it must go through a rigorous process of authentication, correlation and analysis. First, the AI takes in raw data and cleans it to remove extraneous information. Identified threats are classified, and the data is geo-parsed to determine the location of the events. The information is then clustered into event profiles to prevent duplication and excess notifications. Decision-makers are notified of impactful events in real time, based on criteria unique to their organization. Context-driven severity levels are assigned and updated as new factors come into play so leaders can assess the big picture and stay on top of fluid situations. (A simplified sketch of these stages follows this list.)
  • A focus on quality: Starting with a diverse set of global sources, data is pulled from governmental and nongovernmental agencies and news outlets. Expert data scientists who specialize in machine learning vet these sources, so the AI is continuously fed quality data. New sources are validated and added on an ongoing basis, so the data stream stays fresh and isn't limited to a fixed cutoff date.
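
To ground those stages, here is a deliberately simplified sketch of such a pipeline: cleaning raw reports, classifying and geo-parsing them, clustering duplicates, assigning context-driven severity and notifying on the events that matter. Every keyword list, place name and threshold below is a made-up placeholder, not a description of OnSolve's actual implementation.

```python
# Hypothetical illustration of the pipeline stages described above.
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    text: str
    category: str = "unclassified"
    location: str | None = None
    severity: int = 0

THREAT_KEYWORDS = {          # placeholder classifier rules
    "flood": "severe_weather",
    "tornado": "severe_weather",
    "protest": "civil_unrest",
}
KNOWN_PLACES = {"atlanta", "nashville", "detroit"}   # placeholder geo index

def clean(raw_reports: list[dict]) -> list[Event]:
    """Drop empty or unsourced items and normalize the rest."""
    return [
        Event(source=r["source"], text=r["text"].strip().lower())
        for r in raw_reports
        if r.get("text") and r.get("source")
    ]

def classify_and_geoparse(events: list[Event]) -> list[Event]:
    """Tag each event with a threat category and, where possible, a location."""
    for ev in events:
        for keyword, category in THREAT_KEYWORDS.items():
            if keyword in ev.text:
                ev.category = category
        for place in KNOWN_PLACES:
            if place in ev.text:
                ev.location = place
    return events

def cluster(events: list[Event]) -> list[Event]:
    """Collapse reports about the same category and location into one event."""
    seen: dict[tuple[str, str | None], Event] = {}
    for ev in events:
        seen.setdefault((ev.category, ev.location), ev)
    return list(seen.values())

def assign_severity(events: list[Event], monitored_sites: set[str]) -> list[Event]:
    """Context-driven severity: higher when the event touches a monitored site."""
    for ev in events:
        ev.severity = 3 if ev.location in monitored_sites else 1
    return events

def notify(events: list[Event]) -> None:
    for ev in events:
        if ev.severity >= 3:
            print(f"ALERT: {ev.category} near {ev.location} (severity {ev.severity})")

if __name__ == "__main__":
    raw = [
        {"source": "weather_feed", "text": "Flash flood warning issued for Atlanta"},
        {"source": "news_feed", "text": "flash FLOOD warning issued for Atlanta "},
        {"source": "gov_feed", "text": "Planned protest downtown Detroit"},
    ]
    events = assign_severity(
        cluster(classify_and_geoparse(clean(raw))), monitored_sites={"atlanta"}
    )
    notify(events)
```

The value of a real pipeline lies in the vetted sources and domain expertise behind each stage; the sketch only shows how the stages fit together.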

Consider this example: A severe weather event is detected in proximity to a manufacturing plant along a main supply route. AI can triangulate warning signals and notify decision-makers. This gives leaders the opportunity to shift operations and reroute shipments in advance, so employees are kept safe and business disruptions are minimized. AI doesn’t make the decisions for the leaders, and it doesn’t compose the emergency alerts. Rather, it provides the information necessary to take proactive steps and deliver the right messages to the right audience.  
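
For illustration only, the proximity piece of that scenario might reduce to something as simple as the following once the event and asset coordinates are known. The coordinates, radius and plant name are invented for the example; a production system would draw them from validated feeds and an asset inventory.

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points using the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

storm = (35.05, -85.30)                              # hypothetical storm cell position
plant = {"name": "Plant A", "lat": 35.15, "lon": -85.25}  # hypothetical asset
ALERT_RADIUS_KM = 50

if distance_km(*storm, plant["lat"], plant["lon"]) <= ALERT_RADIUS_KM:
    # The system surfaces the signal; people decide what to do with it.
    print(f"Severe weather within {ALERT_RADIUS_KM} km of {plant['name']}: notify decision-makers")
```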

The quality of the information you receive, and the speed at which you receive it, can determine how well your organization navigates a critical event and keeps people safe. OnSolve Risk Intelligence prioritizes trusted, human-validated data sources in its AI engine to ensure you can mitigate risk and protect your most valuable assets whenever a crisis strikes. Risk managers receive the actionable intelligence needed to make better decisions. And when it’s time to reach your people, our library of alert templates streamlines the critical communications process while keeping messaging within human control, where it belongs.

Learn how your organization can benefit from a critical event management solution supported by AI that’s trustworthy, responsibly sourced and human-validated.

OnSolve

OnSolve® proactively mitigates physical threats, allowing organizations to remain agile when a crisis strikes. Using trusted expertise and reliable AI-powered risk intelligence, critical communications and incident management technology, the OnSolve Platform allows organizations to detect, anticipate and mitigate physical threats that impact their people and operations.