Generative AI poses unprecedented risks and opportunities in cybersecurity, report states

Microsoft’s 2024 Digital Defense Report reveals how generative AI is reshaping the cybersecurity landscape, presenting both unprecedented opportunities and emerging risks. From the words of Shawn Bice, Corporate Vice President of Cloud Ecosystem Security, in the report to words in the report itself: Kazinfrom News Agency correspondent delivers the findings.

AI
Photo credit: publicdomainpictures.net

The transformative power of AI in cyber defense

“We are at the start of what could become one of the most transformative technological eras in modern history. Much has been said and written about how AI can have a significant effect on every industry, but the impact it can have on how businesses secure their most important data and assets in the face of ever-increasing cybersecurity threats will be one of the most critical uses of this technology,” says Shawn Bice.

Emphasizing generative artificial intelligence as "one of the most impactful technological shifts of the past several decades," the publication notes its transforming power. Generative artificial intelligence is poised to "create entirely new processes" as well as be included into thousands of current systems and corporate processes given its wide spectrum of uses. Generative artificial intelligence already shines in jobs like "customer service rep," or "math teacher," and in duties including summarizing and interpreting natural language data. Future developments in artificial intelligence "agents" could include memory and more autonomy to react to triggers outside user inputs.

Still, it also highlights important security issues, stating that although "building is easy; testing is hard." Unlike conventional software, which mostly concentrates on functionality, generative artificial intelligence development calls for extensive testing to accommodate diverse, erratic user inputs. Furthermore, generative artificial intelligence security is said to be "nondeterministic," with vulnerabilities resulting from phrase variances that can change AI behavior and perhaps inspire "jailbreak" harm. Viewing AI security as human validation, supporting thorough testing, adversarial evaluation, and "metacognition"—having one AI analyze another's outputs and include human oversight—helps one address these concerns.

System threats in AI deployment

“Organizations of all sizes around the world are facing the same challenges: infinite amounts of data to manage, more endpoints to secure, and a shortage of talent to operate security environments that are becoming more complex every day. Cybersecurity is a top priority for businesses of all sizes, but at the same time, cybersecurity is an infinite game that has no winner and no end. Defenders must constantly be vigilant as the landscape becomes more intricate,” Shawn Bice pointed out, adding that “With threat actor adoption of AI, the economics and sophistication of attacks are changing rapidly, and with that, the sophistication of how we must defend. Generative AI is ushering in a new era of cybersecurity that can put defenders one step ahead of threat actors. The adoption of large language models (LLMs) tailored for security operation scenarios will see a shift from humans having to write manual automation of repetitive tasks to AI systems capable of detecting and investigating security threats at the skill level of security professionals.”

The report highlights three main system risks generated by generative artificial intelligence: content exposure, overreliance, and system compromise. System breach via cross-prompt injection attacks (XPIA), sometimes known as indirect prompt injection, is a major concern when external data—such as emails or documents—contains malicious payloads. Using big language model (LLM) prompts, this method lets attackers maybe run commands, take over systems, or steal data.

A further major risk is overreliance on artificial intelligence. Users may often "overrate the reliability of AI output," the report notes, resulting in four kinds of overreliance: naive (lack of awareness of AI's limitations), rushed (inadequate verification), forced (inability to check, such as in vision augmentation), and motivated (using AI output to justify preconceived actions). Good mitigating calls for setting clear rules and improving user experiences.

It also emphasizes content exposure hazards, particularly in regard to operators being exposed to damaging materials like hate speech or content on child exploitation. Among the solutions are meta questions and filters from Azure AI Content Safety to restrict access. Looking ahead, Microsoft expects waves of AI-driven fraud, political meddling, and impersonation attacks—which compromise "the most vulnerable targets—humans." Governments especially are advised to give protections for people without access to automated security systems top priority.

A double-edged sword in targeted attacks

“AI can help develop a thorough understanding of a security incident and how to respond in a fraction of the time it would take a person to manually process a multitude of alerts, malicious code files, and corresponding impact analysis. Not only can this significantly reduce the time to identify, investigate, and respond to an incident from days to minutes, but this AI-driven threat analysis provides the opportunity for security teams to learn and train in real-time, helping to reduce the skills gap and freeing up experienced analysts to focus on more important tasks,” Shawn Bice stated.

According to Microsoft's analysis, hackers seeking to take advantage of efficiency in targeting high-value individuals now find great use for artificial intelligence. The research notes, "Behind every bot is a real person," yet artificial intelligence's fast data-processing capacity lets attackers find profitable targets with never-seen speed. As Microsoft points out, the competitive environment of artificial intelligence in cybersecurity has spawned a race; "whatever party masters AI faster will have a near-term advantage." Still, spotting and stopping AI-driven targeting is more difficult even with AI-enabled defenses.

AI's offensive powers include avoiding common human mistakes and automating chores that once took human operators months or years. For example, AI-enabled spear phishing and whaling techniques might mix exact targeting with latent malware. Using device cameras, speakers, and GPS, the AI verifies its target before deploying, potentially exfiltrating sensitive data before detection. Attackers can also utilize "résumé swarming," creating fake candidates catered to job openings, hence flooding hiring systems with "perfect" AI-generated résumés. These, when combined with deepfake technology, take advantage of social engineering at unprecedented levels, therefore widening the threat scene as artificial intelligence develops and adaptable to avoid protections.

This all boils down to the need of “The deployment and utilization of AI and agents will be vital,” as Shawn Bice emphasizes, “especially with threat actors becoming more sophisticated in their tactics every day.” He additionally states that “History has shown, technology can have the ability to elevate our human potential, and through innovation, collaboration and responsible use of generative AI and agents, defenders will be positioned to take on cybersecurity’s toughest challenges and work toward making the world safer for all,” the issues of cybersecurity remain substantial nevertheless.

Currently reading