Interview

AITech Interview with Dr. Peter Garraghan, CEO and Co-founder at Mindgard

AITech Interview with Dr. Peter Garraghan, CEO and Co-founder at Mindgard

AI security demands new strategies. A sharp look at the evolving risks and why runtime testing must lead the way.

Dr. Garraghan, your career has been deeply rooted in AI security. What initially drew you to this field, and how did that journey lead to the founding of Mindgard?

Prior to founding Mindgard, I’d spent over a decade seeking to better understand how developing AI and machine learning systems were reshaping the world in powerful new ways. Most of this was done within the context of academia. I’m a professor of computer science at Lancaster University and a fellow of the UK Engineering and Physical Sciences Research Council (EPSRC). The research I did at the university led me to the inevitable conclusion that conventional application security methods weren’t equipped to properly address AI-specific risks. This led me to assemble an R&D team to develop the first commercial security tool for AI models. Those efforts eventually evolved into Mindgard and our enterprise-ready solution.

AI adoption is accelerating, but so are its security risks. Could you outline some of the most critical threats that AI systems face today, and why they are more complex than traditional cybersecurity challenges?

The first things that come to mind when we think about threats to AI systems are prompt injection, model theft, and data leakage, but these are only part of the whole picture. We’ve seen attackers manipulate AI to extract sensitive information, bypass safety guardrails, and clone proprietary models at a fraction of the original cost.

What separates AI cybersecurity from traditional cybersecurity is the very nature of AI systems. Because AI is always evolving, so are the vulnerabilities and the methods through which attackers can exploit those vulnerabilities. AI even accelerates and scales existing threats while introducing new attack vectors: deepfakes, automated reconnaissance, hyper-personalized phishing emails with data pulled from multiple sources, and generative AI-enabled fraud. So AI doesn’t just have a more complex set of threats to itself, but also complicates traditional cybersecurity challenges. This is critical for cybersecurity professionals to understand—and address.

Attacks like prompt injection and adversarial inputs are evolving rapidly. How do these threats exploit AI vulnerabilities, and what makes them particularly difficult to mitigate using conventional security tools?

These attacks exploit AI vulnerabilities by manipulating model behavior. These threats evolve rapidly, taking advantage of AI’s reliance on untrusted inputs and its opaque, black-box like, decision-making. Traditional security solutions, like code scanners or firewalls, fail to address these risks because vulnerabilities emerge at runtime rather than in static code. The only way to truly mitigate these threats is with continuous, automated security testing that’s always learning from what it identifies and defends against. Conventional security tools aren’t designed to work continuously in this matter, let alone learn from their own functions.

Mindgard’s DAST-AI solution is positioned as a game-changer in AI security testing. What sets it apart from traditional security approaches, and how does it address vulnerabilities that WAFs and static analysis tools miss?

Our Dynamic Application Security Testing for AI (DAST-AI) methodology simulates real-world attack scenarios to identify vulnerabilities in a running application. It’s the first comprehensive testing solution to apply the concepts of DAST to AI. This makes it ideal for catching runtime vulnerabilities before and during continuous deployment cycles. Acting as an outside attacker on a running AI system, it tests the LLM, RAG, and all the other components that make up the entire operational model. It catches vulnerabilities that WAFs and static analysis tools miss precisely because it tests an instantiated model. Certain vulnerabilities that only surface when a system is running, and DAST-AI can catch these. Static code analysis simply can’t do that, and it would be hugely expensive to do it via manual testing.

You’ve emphasized the need for continuous AI security testing. In practical terms, how can organizations integrate security testing throughout the AI lifecycle without hindering innovation and deployment speed?

Security testing takes time and effort. Once adequate continuous testing systems are in place, however, there’s no reason for their maintenance to hinder innovation or deployment speed. If anything, such testing systems maintain the pace of innovation because they keep teams from having to put out time-consuming AI cybersecurity fires. 

Compliance regulations like the EU AI Act and ISO 42001 are pushing organizations to rethink their AI security strategies. How should businesses align security efforts with these evolving regulatory frameworks?

All related regulations and actions essentially signal that AI security frameworks are maturing. These measures will actually make it easier for businesses to align their security efforts accordingly, since they’re changing the approach from ad-hoc security measures to continuous, standardized risk assessment. Whatever the future of compliance holds, it’s going to require organizations to demonstrate security testing, governance, and auditability of their AI systems. Introducing dynamic testing to ensure that AI systems remain secure is the best thing businesses can do right now.

There’s often a gap between cybersecurity teams and AI development teams. How can organizations foster better collaboration between these groups to build more secure AI systems?

Some of that comes down to good organizational management, and some of that comes down to better training around AI. I find myself regularly demystifying AI to established technologists who are otherwise experts in infrastructure security and data protection. AI development teams need to work side-by-side with cybersecurity teams. A fundamental disconnect between the two is a vulnerability in and of itself. Cybersecurity teams need to understand that AI introduces unique vulnerabilities that differ from traditional systems, and that the threats from AI behavior are higher and harder to test when compared to other software.

Mindgard is backed by over a decade of AI security research. How has your team’s academic expertise shaped the development of your security solutions, and what advantages does this bring to your customers?

Our academic expertise puts intense R&D at the very heart of our company. I oversee a significant portion of the UK’s AI security PhD candidates, and we actively tap into that talent pool to drive innovation. The academic expertise at our disposal is behind our development of the most comprehensive threat library in the industry and the first-ever DAST-AI methodology. The advantage is a combined decades’ worth of cutting-edge academic research funneled into real-world applications. That’s just not something other companies can offer.

Looking at the bigger picture, what shifts do you anticipate in AI security over the next decade, and what should organizations be doing today to prepare for these changes?

We’re in a watershed moment for AI, and that includes AI security. Inevitable, high-profile security incidents that happen now and in the near future will continually underscore the essential nature of comprehensive AI security. The best thing organizations can do is invest in their own autonomous, continuous AI security systems. We’re going to see an increased global appetite for R&D around the problems of AI security, if only because we’re going to see increasingly complex threats. We’re also going to see more mandates for it coming from the top. Governments can’t ignore this any longer, nor will they want to.

AI security isn’t just a technical challenge—it’s also an organizational and societal one. What mindset shifts do businesses need to make to treat AI security as a fundamental, ongoing priority rather than an afterthought?

Recent reports show that security and data privacy concerns are the main hindrances preventing enterprises from implementing AI in their operations, and that many CTOs don’t feel properly equipped to lead AI integration efforts. There’s an enormous need for training within organizations themselves. Demystification is key. AI becomes far less inscrutable with a reasonable amount of training. This is important because the best AI security solutions must balance automation with oversight, and the overseers need to know what they’re looking for. The requisite mindset shift is one that’s just open to learning and understanding. The rest will take care of itself. 

A quote or advice from the author: “AI security isn’t just about technology. It’s about understanding how AI changes the nature of cyber threats and ensuring security evolves just as fast.”

Dr. Peter Garraghan

CEO & Co-founder at Mindgard

Dr. Peter Garraghan is CEO & co-founder at Mindgard, the leader in Artificial Intelligence Security Testing. Founded at Lancaster University and backed by cutting edge research, Mindgard enables organizations to secure their AI systems from new threats that traditional application security tools cannot address. As a Professor of Computer Science at Lancaster University, Peter is an internationally recognized expert in AI security. He has devoted his career to developing advanced technologies to combat the growing threats facing AI. With over €11.6 million in research funding and more than 60 published scientific papers, his contributions span both scientific innovation and practical solutions.

Mindgard is the leader in Artificial Intelligence Security Testing. Founded at Lancaster University and backed by cutting edge research, Mindgard enables organizations to secure their AI systems from new threats that traditional application security tools cannot address. Its industry-first, award-winning, Dynamic Application Security Testing for AI (DAST-AI) solution delivers continuous security testing and automated AI red teaming across the AI lifecycle, making AI security actionable and auditable. For more information, visit mindgard.ai

AI TechPark

Artificial Intelligence (AI) is penetrating the enterprise in an overwhelming way, and the only choice organizations have is to thrive through this advanced tech rather than be deterred by its complications.

Related posts

AITech Interview with Kenta Yasukawa, Co- Founder and Chief Technology Officer at Soracom

AI TechPark

Interview with Carla Leibowitz, Chief BD Officer at Paige AI-derived Cancer Biomarker Testing Company

AI TechPark

AITech Interview with Kiranbir Sodhia, Senior Staff Engineering Manager at Google

AI TechPark