The AI balance of power lies with cyber defenders

By Maxime Hambersin, Senior Director of Product Management International, DocuSign

Amid a rapidly evolving technology landscape, the symbiotic relationship between artificial intelligence and cybersecurity has become a linchpin of our digital existence. As we move deeper into the era of automation and machine learning, the potential for innovation is boundless. But that power brings responsibility, and striking the balance between taking advantage of AI and safeguarding against cyber threats matters more than ever.

That balancing act particularly plagues small and medium-sized enterprises that fall below what Andy Steingruebl, chief information security officer (CISO) at Pinterest, recently called the ‘security poverty line’ during a webinar on the future of AI and cyber crime: organisations whose revenue, size, or experience leaves them unable to do cybersecurity for themselves. Many have not kept their cybersecurity up with the pace of digitalisation and, as a result of that mismatch, find themselves facing increased attacks.

The strain follows an onslaught of cyber attacks through 2022 and 2023. Ransomware-as-a-service (RaaS) operations, faster and more sophisticated than before, rose by 112% as eCrime adversaries proved their ability to adapt even in the face of preventative measures. Supply chain attacks exploited the trust between organisations to inject malicious code into open-source libraries and other dependencies. And RaaS in particular is becoming far more advanced with the help of generative AI, which can fill the role of a virtual, highly skilled hacker.

The evolution (and emerging threat) of generative AI

Although it has already begun to reshape industries and redefine the ways we live and work, AI has also brought a darker side to digital transformation. The same capabilities that make AI a powerful generative tool also render it susceptible to exploitation by malicious actors. Ultimately, it means hackers can scale their work and attack an even wider range of targets.

But while there are risks, there is also scope for AI to bolster cyber defence teams, as Kurt Sauer, CISO at DocuSign, points out. “I think it’s important to say that AI can…help people doing the defence work; it can help identify pertinent characteristics that you might not have been able to look at manually. I can certainly see an opportunity to scale security with AI.”

So, how might AI help those who can’t afford to do security properly, or who lack the knowledge needed to defend themselves? Sauer suggests using AI to automate repetitive processes such as filling out questionnaires, or building incident response playbooks for resolving critical incidents.
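
As a toy illustration of that kind of automation, the sketch below asks a large language model to draft a first-pass incident response playbook. It assumes the OpenAI Python SDK purely for illustration; the model name, prompt, and incident_type parameter are assumptions rather than a recommendation of any particular product, and any generated playbook would still need review by a human responder.

```python
# Sketch: drafting a first-pass incident response playbook with an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

def draft_playbook(incident_type: str) -> str:
    """Ask the model for a skeleton playbook a human can then refine."""
    prompt = (
        f"Draft a concise incident response playbook for a {incident_type} "
        "incident. Cover detection, containment, eradication, recovery, "
        "and post-incident review, as a numbered checklist."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_playbook("ransomware"))
```

A draft like that gives an under-resourced team something better than a blank page. Automation aside, there are also ways to implement AI-enhanced security safely.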

1. Secure the data used to train AI models

Generative AI models rely on being fed data to generate responses and insights, which is why that data has become a prime target for cyber attacks. To minimise the risk, organisations should build trust and security into how they use AI. That means tasking your CISO with identifying and classifying sensitive data, and investing in data loss prevention (DLP) software to stop leaks while using AI; such tooling can assist with data backup and recovery, encryption and authentication, and policy enforcement.
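
As a minimal sketch of that classification step, the snippet below flags obviously sensitive patterns in text before it is passed to an AI service. The patterns and the redact_before_ai helper are illustrative assumptions; production DLP tooling goes much further.

```python
# Sketch: flagging obviously sensitive strings before text reaches an AI
# service. The patterns are illustrative; real DLP tools go much further.
import re

# Hypothetical patterns for two common kinds of sensitive data.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def redact_before_ai(text: str) -> str:
    """Mask matches so the prompt can be sent without leaking the values."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(classify("Contact jane.doe@example.com, card 4111 1111 1111 1111"))
# ['email', 'card_number']
print(redact_before_ai("Contact jane.doe@example.com"))
```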

2. Continuously scan for corruption

Carrying out regular, thorough scans for data corruption or malware across your digital ecosystem should go hand-in-hand with the use of AI, because AI models widen the attack surface organisations expose to malicious actors. Embracing a multi-layered security strategy encompassing firewalls, intrusion detection systems and endpoint protection can create a robust barrier against such attacks.
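
As one minimal example of such a scan, the sketch below records a SHA-256 baseline for files (such as training data or model artefacts) and reports anything that has changed since. The training_data directory and baseline.json path are assumptions for illustration.

```python
# Sketch: detecting file tampering or corruption by comparing SHA-256
# hashes against a previously saved baseline. Paths are illustrative.
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot(root: Path) -> dict[str, str]:
    """Map each file under root to its current hash."""
    return {str(p): hash_file(p) for p in sorted(root.rglob("*")) if p.is_file()}

root = Path("training_data")           # assumed directory of AI assets
baseline_path = Path("baseline.json")  # assumed location of the baseline

current = snapshot(root)
if baseline_path.exists():
    baseline = json.loads(baseline_path.read_text())
    for name, digest in current.items():
        if baseline.get(name) not in (None, digest):
            print(f"ALERT: {name} has changed since the last scan")
else:
    baseline_path.write_text(json.dumps(current, indent=2))
    print("Baseline recorded; run again to detect changes.")
```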

3. Invest in AI-specific defensive tools

Traditional cybersecurity tools, while essential, often struggle to keep pace with the rapidly evolving tactics of malicious actors. Defensive tools built around machine learning algorithms offer a far more dynamic and adaptive line of defence: they can analyse vast amounts of data in real time, detect subtle anomalies, and identify threats before they escalate.
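
To make that concrete, here is a minimal sketch of ML-based anomaly detection using scikit-learn’s IsolationForest. The two features (requests per minute and megabytes transferred) and the contamination rate are assumptions for illustration, not a production detection pipeline.

```python
# Sketch: flagging anomalous network activity with an Isolation Forest.
# Requires scikit-learn and numpy; features and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed features per session: [requests per minute, MB transferred].
normal_traffic = rng.normal(loc=[50, 5], scale=[10, 2], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# New observations: one typical session, one that looks like exfiltration.
new_sessions = np.array([[52.0, 5.5], [900.0, 400.0]])
labels = model.predict(new_sessions)  # 1 = normal, -1 = anomaly

for session, label in zip(new_sessions, labels):
    status = "anomaly" if label == -1 else "normal"
    print(f"{session} -> {status}")
```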

Organisations now face greater demand for privacy and trust, which means ensuring individuals’ data is protected. Investing in digital identity solutions is key to this, providing a secure and efficient means of authenticating users and safeguarding sensitive information.

One overlooked area is electronic agreements: often a blind spot for organisations, yet key to a holistic security strategy. Few organisations have sufficiently practical and secure electronic identity verification in place, though AI-powered solutions can swing the pendulum back in favour of defenders by helping to spot fraud.
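
As a toy illustration of why agreements benefit from cryptographic protection, the sketch below signs an agreement with an Ed25519 key and shows that any tampering breaks verification. It uses the widely available cryptography Python package; the in-memory key handling is simplified for illustration and is not how a production e-signature platform manages keys.

```python
# Sketch: tamper-evident agreements via Ed25519 signatures.
# Requires the 'cryptography' package; key handling is simplified here.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

agreement = b"Supplier agrees to deliver 100 units by 2025-01-31."

# In practice keys live in an HSM or key management service, not in memory.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

signature = private_key.sign(agreement)

def verify(document: bytes) -> bool:
    """Return True only if the document matches the original signature."""
    try:
        public_key.verify(signature, document)
        return True
    except InvalidSignature:
        return False

print(verify(agreement))                           # True
print(verify(agreement.replace(b"100", b"1000")))  # False: tampered
```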

The odds are in favour of cyber defenders—for now

With the average data breach costing companies $4.5m (£3.6m) globally, AI-powered cybersecurity is set to become as big an investment as AI itself. Automating threat detection can not only accelerate incident response times but also lighten the burden on human resources, allowing cybersecurity teams to focus on more complex defence methods. The timing is crucial: according to one cybersecurity service provider, 79% of cybersecurity professionals are deprioritising key tasks in order to stay on top of their workload.

While it amplifies risk, AI has enormous potential to scale and speed up security processes, leaving cybersecurity teams better equipped to adapt to an evolving digital landscape. For now, it remains a double-edged sword.