Balancing AI’s Promise and Risks in Cybersecurity
How to Responsibly Embrace AI’s Potential to Strengthen Cybersecurity Defenses
August 14, 2024
The potential for cybercriminals to unleash devastating AI-enhanced attacks conjures up frightening visions of cyberattacks that are bigger, broader and more difficult for organizations to detect and prevent.
Luckily, this is not yet happening – as far as anyone can tell.
Undoubtedly, the evolution of generative AI over the past year has unleashed a torrent of eyebrow-raising hypothetical scenarios, but as Verizon's 2024 Data Breach Investigations Report shows, there is a gap between generative AI's perceived capabilities and its actual use in cyberattacks. The 2024 DBIR cites skyrocketing gen AI "hype" alongside very few actual gen AI "mentions" in connection with traditional attack types and vectors such as phishing, malware, vulnerability exploitation and ransomware.
Despite the hype, it’s still essential for security leaders to focus on AI risks now. Many organizations have already begun to evaluate how AI can be used to improve their cyber defenses, especially in the detection and triage of cyberattacks. “Most people will not even realize when an AI-enhanced attack happens, because AI’s impact is so nuanced,” said Chris Novak, Verizon’s senior director of cybersecurity consulting. “If you’re in technology or the field of generative AI, however, you can see how easy it is to manipulate data to get the desired effects.”
With more than 20 years of experience in cybersecurity, Novak and his team work to support both public and private sector clients. He said that AI-powered algorithms are being used on the “promise” side of the equation to analyze vast amounts of data in near real time, allowing rapid identification of anomalies that help organizations intercept potential threats. The need for speed is clear, as attack incidents can crop up and spread quickly – making a strong case for AI-enabled cybersecurity monitoring tools. “The faster you can respond to an incident, the better,” he said.
AI algorithms can be used to detect patterns and behaviors with a level of precision that surpasses traditional or manual methods. “AI-driven systems can provide more accurate threat assessments by continuously learning from new data and adapting to emerging threats,” Novak said.
AI Threats: Real and Perceived
As organizations work to embrace AI advances, they must also prepare for sophisticated challenges, particularly in areas such as gen AI and deepfake technologies. Gen AI models, for example, are capable of creating convincing fake personae or generating realistic phishing emails, which could enable attackers to target individuals or organizations and evade traditional security measures.
As the 2024 DBIR reports, however, deepfakes appear to be a more pressing AI-driven problem than AI-generated phishing: traditional low-tech phishing methods are still working well enough at catching unsuspecting victims that attackers have little incentive to upgrade them. Advances in deepfake technology, by contrast, have already fueled fraud and misinformation incidents, according to the DBIR, making convincing fake audio, video and imagery the more immediate concern.
AI today presents an early-stage, emerging challenge. “Defenders must learn to adapt their cybersecurity strategies to combat evolving AI-driven threats, while also learning to harness AI/ML to boost defensive capabilities,” Novak said.
Threats From Within
Through its Insider Threat Program, Verizon has been able to establish a baseline of normal behavior, used to help to identify and mitigate risks posed by employees or other authorized users. AI algorithms can then detect deviations or anomalies that may suggest insider threats, such as unauthorized access to sensitive information or unusual data transfers. “If people know there’s a strong and robust insider threat program in place, and their actions are being monitored, that often serves as an effective deterrent,” Novak said.
Consider the scenario of a customer service representative trying to access account information without prior customer approval. “That’s a red flag that would be outside normal actions taken to review user accounts,” Novak said, adding that AI can sift through network data to see those types of behavioral patterns and help quickly connect the dots.
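The baseline-and-deviation approach Novak describes can be sketched in a few lines. The example below is a toy illustration, not Verizon's actual tooling: the user names, event counts and z-score threshold are all hypothetical, and real insider-threat programs use far richer features than a single daily count.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, today, z_threshold=3.0):
    """Flag users whose activity today deviates sharply from their own baseline.

    baseline: dict mapping user -> list of historical daily counts of some
              monitored action (e.g., sensitive-account lookups).
    today:    dict mapping user -> today's observed count.
    """
    flagged = []
    for user, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        observed = today.get(user, 0)
        # Guard against zero variance in the history; any change then counts.
        z = (observed - mu) / sigma if sigma else float(observed != mu)
        if z >= z_threshold:
            flagged.append((user, observed, round(z, 1)))
    return flagged

# Hypothetical data: rep_a suddenly looks up 40 accounts in one day,
# far outside a baseline of 4-6 lookups; rep_b stays in range.
history = {"rep_a": [4, 5, 6, 5, 4], "rep_b": [3, 4, 3, 5, 4]}
print(flag_anomalies(history, {"rep_a": 40, "rep_b": 4}))
```

A production system would score many signals at once — access times, data volumes, destinations — but the core idea is the same: learn what normal looks like per user, then surface the statistical outliers for human review.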
AI can also be instrumental to endpoint detection and response, or EDR, Novak said. It is already used to monitor and analyze endpoint devices for signs of malicious activity or abnormal behavior. By learning from endpoint telemetry, these algorithms can identify malware patterns, unusual process executions or unauthorized system changes that may indicate a security breach. For example, if a ransomware attack begins encrypting files on an endpoint device, AI-driven EDR can detect the encryption process and alert security teams to take immediate action to contain the threat.
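One common heuristic behind the ransomware example is entropy: encrypted or compressed data looks statistically random, so a burst of files suddenly rewritten with near-maximum entropy is a red flag. The snippet below is a simplified sketch of that single signal — the 7.5 bits-per-byte threshold is an illustrative assumption, and no specific EDR product is being described; real products combine many detections.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; random (encrypted) data nears 8.0."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Crude check: flag content whose entropy suggests encryption."""
    return shannon_entropy(data) >= threshold

# Repetitive plaintext has low entropy; random bytes stand in for
# ransomware-encrypted file contents.
print(looks_encrypted(b"quarterly report draft " * 100))  # plaintext
print(looks_encrypted(os.urandom(4096)))                  # random bytes
```

On its own, high entropy also matches legitimate compressed files such as ZIP archives or JPEGs, which is why EDR systems pair signals like this with process lineage and the rate of file modifications before raising an alert.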
AI Governance Best Practices
Using AI without robust governance is like driving a high-powered sports car with no brakes or traffic laws. A car's speed and power create exhilarating possibilities, but controls and rules of the road ensure safety and prevent accidents. AI's powerful capabilities likewise require governance to ensure ethical and responsible use. Novak said good AI governance should incorporate the following elements:
- A rigorous review process for AI applications, ensuring they adhere to ethical standards and legal requirements;
- Strict access protocols for generative AI tools to help prevent misuse and safeguard data privacy and robust authentication measures to help security leaders monitor and track use;
- Education and awareness programs to help ensure employees understand AI-related risks and what they should know to responsibly use AI tools.
Regular training sessions and updates are also useful to help keep staff informed about emerging threats and best practices.
How to Stay Ahead of Cybercriminals
Organizations should adopt a strategic approach to AI, considering both its benefits and risks. Emerging AI capabilities such as advanced natural language processing and automated security responses are being evaluated across many industries to help security teams improve threat detection and shrink incident response times.
On the flip side, however, the potential for AI-powered attacks, even if they are not yet common, calls for continuous monitoring of emerging threats. Cybercriminals are often early adopters of new technologies to fuel their exploits, putting the onus on defenders to be proactive about understanding and adopting AI-enhanced cybersecurity.
By strategically and responsibly using AI, organizations can strengthen cybersecurity defenses to better manage cybersecurity risks, while preparing for new exploits that leverage AI for cyberattacks. In the end, Novak said, the best defense against cyberthreats “will always be a balanced approach that leverages human ingenuity along with AI’s computational power.”
CISOs and security leaders working to learn more about how to responsibly embrace AI’s potential to strengthen cybersecurity defenses should review Verizon’s latest insights here.