Implications of AI for Cyber Defense

The emergence of ChatGPT and other publicly available generative artificial intelligence (GenAI) tools has ushered in a boom in interest, development, and adoption of AI. Many organizations are exploring how they can integrate AI into their business and the potential benefits that it could provide. (And frankly, those that haven't should be!)

One of the potential applications of GenAI — and AI in general — is for cyber defense. Many security companies have been integrating AI and machine learning into their products for years. However, the rapid evolution of GenAI introduces various new security applications and opportunities.

How AI is Used for Cyber Offense and Defense

AI and GenAI have broad potential applicability across many industries. In cybersecurity, they cut both ways: defenders can use them to strengthen protections, while cybercriminals and penetration testers can use them to sharpen their attacks.

Implications of AI for Phishing

Phishing is currently one of the most common cybersecurity threats that companies face: the FBI's 2023 Internet Crime Report identified phishing as the number one complaint received for the third year in a row. In many cases, it's how attackers achieve an initial foothold on a target network before moving laterally to compromise other systems.

Often, modern anti-phishing training teaches users to look for certain "red flags," such as grammatical errors or something that doesn't look or sound quite right. GenAI is now being used to develop much more sophisticated, authentic-sounding phishing emails that eliminate many of these red flags. A cybercriminal can easily generate email copy that is not only grammatically correct but also optimized to maximize the probability that the target will click a link or open a malicious attachment.

AI on the Defense against Phishing

However, the news about AI and phishing isn’t all bad. AI can also be used to enhance defenses against phishing attacks in multiple ways, including: 

  • Text Analytics: Natural language processing (NLP) enables AI to read and understand emails much as a human would. This can aid in identifying psychological techniques used by phishers, such as creating a sense of urgency in the intended target. Suspicious emails can then be assigned an appropriate risk rating as a warning (a minimal classifier along these lines is sketched after this list).
  • Behavioral Analysis: The most effective phishing emails often involve the attacker masquerading as a trusted party. AI can analyze email tone, communication patterns between accounts, and similar information to identify anomalies that may indicate a phishing attack. 
  • Malware Analysis: Phishing emails are often intended to deliver malware to the target computer. AI can be valuable in EDR solutions that analyze payloads in a sandbox environment to detect malicious functionality before it can be delivered to its intended target. 
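
To make the text-analytics idea concrete, here is a minimal sketch of an email risk scorer built on TF-IDF features and logistic regression. The tiny inline training set, the feature settings, and the risk thresholds are all assumptions for demonstration purposes; a production system would be trained on a large labeled corpus.

```python
# Minimal phishing-text scorer: TF-IDF features + logistic regression.
# The tiny training set and thresholds below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_emails = [
    "URGENT: your account will be suspended, verify your password now",
    "Final notice: click here immediately to avoid losing access",
    "Invoice overdue, act now to avoid penalties",
    "Hi team, attaching the slides from yesterday's planning meeting",
    "Lunch is booked for noon on Friday, see you there",
    "Quarterly report draft is ready for your review when convenient",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(training_emails, labels)

def risk_rating(email_text: str) -> str:
    """Return a coarse risk rating for an incoming email."""
    score = model.predict_proba([email_text])[0][1]  # P(phishing)
    if score > 0.7:
        return f"HIGH ({score:.2f})"
    if score > 0.4:
        return f"MEDIUM ({score:.2f})"
    return f"LOW ({score:.2f})"

print(risk_rating("Urgent! Verify your password now or lose access"))
```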

AI for Threat Detection and Response 

One of the most significant challenges that security teams face when performing threat detection and response is parsing through massive amounts of security alert data to identify true threats. Often, effort is wasted on false positives or alerts are ignored due to overwhelming volume and complexity. 

AI is ideally suited to analyzing masses of data to identify patterns and anomalies that could point to an attack. For example, analysis of login attempts could detect "impossible travel" scenarios, in which two of a user's logins come from locations too far apart to travel between in the time separating them.
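
Here is a minimal sketch of that impossible-travel check. The 900 km/h speed threshold (roughly airliner speed) and the sample login events are illustrative assumptions.

```python
# Flag "impossible travel": consecutive logins whose implied speed
# exceeds what a traveler could plausibly achieve (~900 km/h by air).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = Earth's mean radius

MAX_SPEED_KMH = 900  # assumed plausibility threshold

def impossible_travel(prev, curr):
    """prev/curr: (timestamp_seconds, latitude, longitude)."""
    hours = (curr[0] - prev[0]) / 3600
    if hours <= 0:
        return True  # simultaneous logins from two different places
    return haversine_km(prev[1], prev[2], curr[1], curr[2]) / hours > MAX_SPEED_KMH

# Example: a Toronto login, then a London login 30 minutes later.
login_a = (1_700_000_000, 43.65, -79.38)   # Toronto
login_b = (1_700_001_800, 51.51, -0.13)    # London, +30 minutes
print(impossible_travel(login_a, login_b))  # True: ~5,700 km in 0.5 h
```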

If a security team identifies a true threat, AI can also be valuable in investigation and response. Root cause analysis — the identification of the original cause of a security incident — is often one of the most laborious aspects of incident response. It is also a task that can be partially or wholly automated as part of an AIOps platform.

AI can also contribute to speeding up the recovery process. GenAI can automatically generate remediation playbooks after a threat has been identified. AI can also automate the execution of these playbooks, enabling rapid remediation after a human analyst has signed off on the suggested actions. 
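
As a sketch of what playbook generation might look like in practice, the snippet below asks an LLM to draft remediation steps for an analyst to approve. It assumes the openai Python package and an OPENAI_API_KEY environment variable; the model name and prompt wording are illustrative choices, not a prescription.

```python
# Sketch: asking an LLM to draft a remediation playbook for human review.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def draft_playbook(incident_summary: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Produce a numbered "
                        "remediation playbook. An analyst must approve "
                        "every step before anything is executed."},
            {"role": "user", "content": incident_summary},
        ],
    )
    return response.choices[0].message.content

playbook = draft_playbook(
    "Credential-stuffing attack detected against the VPN gateway; "
    "three accounts show successful logins from flagged IP ranges."
)
print(playbook)  # Route to a human analyst for sign-off before automating.
```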

Expedited, Scalable Threat Hunting

Not every cyber attack is identified and blocked in time. In some cases, cyber threat actors gain access to an organization's systems and stay there for days, weeks, or even years. According to IBM's 2023 Cost of a Data Breach Report, it takes organizations an average of 204 days (over 6½ months!) to identify a data breach, and an additional 73 days to investigate and contain it. That's a long time.

Threat hunting can help identify these resident threats but, like threat detection and response, it requires weeding through huge quantities of security data in search of anomalies and patterns. Human analysts often lack the resources to do so effectively, but AI can leverage indicators of compromise (IoCs), threat intelligence, and advanced analytics to reveal these hidden threats.

If an intrusion is identified, AI can also help with extracting and disseminating IoCs for that threat. For example, AI could be used to define key behavioral indicators of a piece of malware or describe a phishing email so that others can identify and block it. By doing so, it streamlines threat hunting for other victims and may even enable them to block the attack before they are compromised as well.
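
A basic form of IoC-driven hunting can be sketched as a scan of historical logs against a shared indicator set. The indicator values and log format below are invented for the example; a real hunt would pull indicators from a threat-intelligence feed.

```python
# Sketch: hunting historical logs for known indicators of compromise.
# The indicator values and log format are invented for illustration.
import re

IOC_IPS = {"203.0.113.45", "198.51.100.7"}          # documentation-range IPs
IOC_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}    # example MD5 digest

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
HASH_PATTERN = re.compile(r"\b[a-f0-9]{32}\b")

def hunt(log_lines):
    """Yield (line_number, matched_indicator) for every IoC hit."""
    for number, line in enumerate(log_lines, start=1):
        for ip in IP_PATTERN.findall(line):
            if ip in IOC_IPS:
                yield number, ip
        for digest in HASH_PATTERN.findall(line.lower()):
            if digest in IOC_HASHES:
                yield number, digest

logs = [
    "2024-05-01 10:02:11 outbound connection to 203.0.113.45:443",
    "2024-05-01 10:02:13 dropped file d41d8cd98f00b204e9800998ecf8427e",
    "2024-05-01 10:03:40 user jdoe logged in from 192.0.2.10",
]
for line_no, indicator in hunt(logs):
    print(f"IoC hit on line {line_no}: {indicator}")
```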

Simplifying User Interaction 

One of the defining features of GenAI is its ability to hold conversations. Tools like ChatGPT, Claude, and Gemini can have convincing discussions on a wide variety of topics.

From a security perspective, this gives security analysts the ability to "talk to" their data. This can be invaluable for investigating cyber attacks or performing threat hunting.

GenAI can also be useful for simplifying an organization’s regulatory compliance efforts. With access to an organization’s security data, a secure GenAI engine running in-house could answer users’ questions or even fill out the questionnaires and forms needed for compliance reporting.
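
One way to sketch this "talk to your data" pattern is to pair an LLM with retrieved alert context. The retrieval step below is a naive keyword filter and the alerts are fabricated examples; as in the earlier playbook sketch, the openai package and the model name are assumptions.

```python
# Sketch: letting an analyst "talk to" security data by pairing an LLM
# with retrieved context. Retrieval here is a naive keyword filter; a
# real system would use proper search or a vector index.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

ALERTS = [
    "2024-05-01 blocked phishing email to finance team with credential-harvesting link",
    "2024-05-02 EDR quarantined macro-enabled document on host FIN-LAPTOP-07",
    "2024-05-03 impossible-travel login flagged for user jdoe",
]

def ask(question: str) -> str:
    # Keep only alerts that share at least one word with the question.
    terms = {word.strip("?.,!").lower() for word in question.split()}
    context = [a for a in ALERTS if terms & {w.lower() for w in a.split()}]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer strictly from these alerts:\n" + "\n".join(context)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What happened to user jdoe?"))
```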

Preparing for AI-Driven Cyber Defense

AI has the potential to revolutionize cyber defense. It can improve phishing detection, manage the avalanche of security data, and provide greater visibility into an organization's current security posture.

However, AI also introduces new security risks for the organization. Some of the most significant include:

  • Bad Training Data: AI solutions use training data to develop the models that they use to make decisions, classify data, and interact with people. If this data is corrupted (accidentally or intentionally) or otherwise biased, the model may miss certain threats. 
  • Data Leaks: Large language models (LLMs) like ChatGPT may use user-provided prompts to train themselves. This can result in a leak of sensitive data if corporate or customer information is entered into an AI-enabled system. This significant risk should be addressed by appropriate corporate use policies and security awareness training (a simple prompt-scrubbing sketch follows this list).
  • Prompt Injection: Prompt injection attacks use maliciously crafted prompts to make LLMs misbehave. These attacks could be used to gain unauthorized access to sensitive data or cause an AI-based solution to miss a threat. 
  • Wrong Decisions: AI isn't perfect, and the decisions or data that it produces may not be correct. Instances of "AI hallucinations" are well-documented. If organizations rely on the accuracy of AI outputs without checks and balances in place, these errors could have significant business impacts.
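
As a small example of mitigating the data-leak risk above, the sketch below scrubs obvious sensitive patterns from a prompt before it leaves the organization. The regex patterns shown are a tiny illustrative subset of what real data-loss-prevention tooling covers.

```python
# Sketch: redacting sensitive-looking substrings from a prompt before it
# is sent to an external LLM. Illustrative patterns only.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD-OR-ACCOUNT]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def scrub(prompt: str) -> str:
    """Replace sensitive-looking substrings with placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Summarize: card 4111 1111 1111 1111, contact jane.doe@example.com"))
# -> Summarize: card [CARD-OR-ACCOUNT], contact [EMAIL]
```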


These and other AI security risks make having an AI security policy a priority for any company. Some best practices include: 

  • Implement Defense in Depth: Relying solely on an AI-based security solution risks missed detections. Implementing defense-in-depth strategies reduces the probability of an attack slipping through the cracks. 
  • Have an AI Governance Strategy: Unmanaged use of AI magnifies an organization’s potential attack surface. Implementing an AI governance strategy based on ISO 42001 or similar standards reduces AI security risks and maximizes its potential benefits. 
  • Consider Compliance Risks: Many new regulations and standards have components related to AI governance and security. It’s vital that organizations research their new regulatory responsibilities and implement compliant programs. 


AI can be a major boon for a corporate cybersecurity program, but it needs to be carefully designed, deployed, and managed to mitigate risk, maximize ROI, and maintain compliance. For help with managing your company’s AI security posture, contact ISA Cybersecurity today. 
