Managing Cybersecurity in the Age of Artificial Intelligence

Generative AI is promising to be the most disruptive technology since the internet. According to Accenture’s modeling of workplace impact, 44 per cent of working hours in the United States are in scope for automation or augmentation with this technology. Meanwhile, while studying outputs (the amount of goods and services produced), we found that 92 per cent of companies expect productivity gains of over 11 per cent in the next three years. With that said, 67 per cent of chief executive officers said security is a major consideration impacting the adoption or implementation of generative AI.

CEOs are right to be concerned.

While generative AI is democratizing access and driving productivity, it is also leading to insecure deployments and AI-powered threats. Malicious large language models (LLMs), such as FraudGPT and PentestGPT, are creating content to facilitate cyberattacks. As noted in an Accenture analysis of generative AI’s impact on the threat landscape, these LLMs can be purchased for as little as US$200 a month on the dark web. Also consider that in less than a year since the launch of ChatGPT in November 2022, phishing attacks have surged by 1,265 per cent, according to a SlashNext study. Meanwhile, as stated in Onfido’s latest identity fraud report, deepfake attempts leveraging generative AI jumped 3,000 per cent year-over-year in 2023. Accenture’s Cyber Intelligence researchers also observed a 223 per cent rise in the trade of deepfake-related tools on major dark web forums between Q1 2023 and Q1 2024.

Easy access to generative AI applications means employees are unfettered in using AI free of charge and without proper guardrails or governance. Even those who aren’t as familiar with the applications can quickly get up to speed through user tutorials on social media or other public channels. Unauthorized use of generative AI by employees can lead to damaging security breaches. According to research from Gartner, 41 per cent of employees were using or creating technology without their IT department’s knowledge in 2023, and this number is expected to increase to 75 per cent by 2027.

When employees use generative AI without IT oversight or approval, they risk unleashing a torrent of cyber threats that can cripple an organization. Businesses must recognize that the cost of a single careless action could far outweigh the gains. Vigilance is not just advisable—it is essential to ensure that generative AI’s productivity benefits are not overshadowed by potential business losses.

“When employees use generative AI without IT oversight or approval, they risk unleashing a torrent of cyber threats that can cripple an organization. Businesses must recognize that the cost of a single careless action could far outweigh the gains.”

To fully realize competitive gains, businesses must elevate their cybersecurity strategies to address gaps and vulnerabilities—both within and outside their organizations. From our client experience and research, we believe companies must adopt a proactive, multi-pronged approach that calls for modernizing their security setup, heightening awareness, and building a trusted technology foundation that resonates with managers across all departments and functions.

EXPOSING VULNERABILITIES

As generative AI intensifies competition for labour productivity, it is also opening new vulnerabilities that can be exploited by bad actors:

  • Insecure deployment of generative AI: Generative AI models often operate as black boxes, making it difficult for organizations to understand or control their inner workings. In addition, our research into cyber resiliency from late 2023 shows that only 17 per cent of chief information security officers work closely with AI and data teams to secure generative AI. This lack of transparency and collaboration with security teams leads to two issues. First, organizations struggle to identify potential cyber risks that could help them adopt appropriate security measures. Second, it compromises the security of key generative AI pillars—cloud, data and AI, and applications—which leads to insecure deployments. Consequently, this increases the risk of threats such as model disruption, data poisoning, and manipulation.
  • Excessive accessibility: Accessibility, a desirable attribute of generative AI, can also be its shortcoming. While the democratization of generative AI empowers organizations to enhance productivity, it also provides powerful tools for external threats. This accessibility enables the creation of more sophisticated attacks, including deepfakes, advanced phishing schemes, automated malware, misinformation campaigns, and synthetic identity generation.
  • Lax experimentation: The proliferation of a broad set of generative AI applications aimed at various use cases (coding, media generation, meeting assistants, and research aids) and the low barriers to use them encourage experimentation, often without the necessary technical expertise. In an enterprise setting, this experimentation can lead to employees independently trying out generative AI tools or models to solve specific problems or enhance their work, bypassing traditional IT channels. This unregulated experimentation could expose sensitive organizational data on a public or insecure generative AI application and thereby significantly increase the risk of data exposure.
THREE WAYS TO IMPROVE CYBER-RESILIENCE

As generative AI continues to advance, so does the sophistication of cyberattacks. Headlines filled with admiration for generative AI’s ability to expedite intricate and expensive tasks like drug discovery can appear in the same news cycle as reports of ChatGPT’s software leaks. Recent research in the WEF Global Cybersecurity Outlook 2024 report conducted in collaboration with Accenture shows that 56 per cent of respondents believe generative AI will provide an overall cyber advantage to attackers over defenders. This underscores the utmost importance of integrating security measures from the very beginning of deployment and throughout AI’s use to protect against evolving threats and unintended adverse consequences.

Responsible AI for generative AI is a fundamental principle for enterprises, encompassing deliberate efforts to design, deploy, and utilize AI in a manner that generates value while prioritizing trust and mitigating potential risks. It begins with a set of governing principles, which each organization adopts and then enforces. In our experience, companies can employ the following strategies to bolster their cyber-resilience:

Modernize security setup: The cloud has paved the way for technology democratization. Employees now have easy access to cloud-based technologies such as generative AI. While this is providing a measurable boost to productivity and innovation, it is also exposing organizations to security breaches. The first step toward building a resilient, generative AI program is to modernize security at the same pace as business innovation. This requires a strategy that reinforces security and governance from the outset. Our research shows that 89 per cent of C-level executives recognize that they need to completely or significantly change their security strategy to defend against modern threats. To effectively counter these threats, companies must implement strategies that earn the trust and buy-in of all department leaders—from legal and regulatory to security, human resources, and operations. They can undertake five critical activities to underpin such strategies: (1) form a cross-functional team to oversee the responsible use of generative AI with regard to adoption, audits, management, and security; (2) develop a set of guiding principles for generative AI governance that are specific to business needs and priorities; (3) set up clear generative AI use policies and enforce supporting controls, in collaboration with department managers, for specific use cases; (4) define clear roles and shared responsibilities to integrate generative AI into existing legal and compliance processes; and (5) collaborate with governments and industry peers to shape forward-thinking cybersecurity policies that ensure the responsible and secure deployment of generative AI.

Heighten awareness: Half of CEOs are concerned about employees exposing sensitive data, according to our research. Role-based training and awareness campaigns are vital to managing the human risk to data loss and exposure. Department managers must collaborate with existing training and compliance teams, integrating generative AI-specific education into ongoing programs. Such training must educate employees on the potential risks associated with generative AI in their areas, with a strong emphasis on fostering an ethical culture as well as a focus on how threat actors may use AI to improve social engineering tactics, including building targeted deepfakes. Key focus areas include data privacy, security measures, and adherence to established policies and procedures.

To ensure that responsible AI is a continuous component of daily employee activities, organizations should implement training throughout the year, as annual compliance training is insufficient and dated. The evolving threat landscape paired with the rapid pace of innovation in the AI space results in frequent changes that necessitate a continuous awareness campaign, combining traditional training materials to ground employees on key concepts with shorter, easier to digest, and more regular knowledge bursts like blog posts, newsletters, and FAQs. By equipping employees with knowledge and skills to identify and address potential security issues, the organization can significantly enhance its overall security posture. Emphasizing responsibility and accountability encourages informed decision-making, creating a work environment where ethical considerations are integral to AI-related decision processes. Doing this can infuse the practice of safe generative AI usage across functions and operations—from payroll to supply chains to customer relationships.

Innovate from a trusted foundation: Securing AI requires a layered approach—there is no quick fix. The lack of transparency in the integrated security features or training data of publicly accessible generative AI applications like ChatGPT and Bard makes it imperative that organizations develop and manage their own applications by fine-tuning foundational models. Companies can fortify their generative AI applications by securing the underlying cloud foundation, implementing data and robust identity and access management controls. Additionally, integrating generative AI applications with the organization’s threat monitoring, detection, and response capabilities is vital. Invest in AI firewalls, which utilize intelligent technologies to enhance the detection of sophisticated threats. Unlike most current firewalls, which rely on static rule databases, AI firewalls continuously optimize threat detection models using real-time data flows going in and out of your organization’s network.

A firm should also conduct a red team exercise on its generative AI application by subjecting the generative AI model to simulated attacks, which allow it to identify and address vulnerabilities before malicious actors exploit them. This includes testing generative AI against adversarial prompts and prompt injections to bolster resilience against threats.

Finally, generative AI also offers a chance to reinvent cybersecurity, turning the tables on attackers by improving defense capabilities. Traditional security measures alone cannot counter AI-driven threats, so organizations should adopt AI-powered defenses. As cybersecurity departments secure generative AI use and development, they will benefit by turning inward and leveraging AI tools to help them address repetitive and time-consuming tasks ranging from log analysis and configuration reviews to empowering security analysts with insights and guidance on incident response as well as enabling user-friendly dissemination of security policies through chatbots. Platform companies and hyperscalers are already deploying AI-driven security features across their ecosystems and beyond. For example, Accenture’s MxDR service, powered by Google Cloud’s security-focused generative AI, integrates seamlessly with various security environments and clouds, enhancing detection, response, communication, and adaptation to global threats.

Generative AI can be a double-edged sword. On one hand, it promises to have a huge impact on the global economy, potentially increasing GDP by 7 per cent in the next ten years, according to Goldman Sachs. On the other hand, it could also unleash continuous and unpredictable cyberattacks. By fostering a culture of collaboration, transparency, and proactive governance, we can shape a future where innovation thrives within the boundaries of security and ethical responsibility. Only then can we truly unlock the full potential of generative AI while safeguarding against the shadows that lurk in the realm of this technology.

The authors would like to thank Gargi Chakrabarty, Periklis Papadopoulos, Manav Saxena, and Neethu Eldose at Accenture for their contributions.

About the Author

Paolo Dal Cin is the Global Lead at Accenture Security. Contact: paolo.dal.cin@accenture.com

About the Author

Valerie Abend is the Global Cyber Strategy Lead at Accenture Security. Contact: valerie.abend@accenture.com

About the Author

Daniel Kendzior is the Global Data and AI Lead at Accenture Security. Contact: daniel.kendzior@accenture.com

About the Author

Yusof Seedat is the Global Research Lead at Accenture Security. Contact: yusof.seedat@accenture.com