Artificial intelligence (AI) now sits on both sides of the cybersecurity coin. Criminals are using it to automate phishing, while defenders rely on it to spot the tiny anomalies humans miss. Many organisations might be tempted to jump in as quickly as possible, but questions of accountability and governance must come first.
AI isn’t going away, so the question is, who controls the context? Executives are asking, ‘What exactly is the model doing, where is it used, and how do we keep it accountable?’ If you can’t answer that, you are already on the back foot.
AI’s value and the flipside of AI-enabled attacks
On the defender’s side, AI is quickly proving its worth, especially in vulnerability management. Machine-learning models surface, within minutes, the weak signals that would take a human team hours or days to connect. We get a clearer picture of risk while the window for attackers is still closing.
AI-driven analytics now provide continuous visibility across code repositories, cloud workloads, and employee endpoints. That breadth used to demand multiple point tools; now a single model can correlate it all.
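To make that idea concrete, here is a minimal sketch in Python using an off-the-shelf isolation forest over signals that might be aggregated from repositories, workloads, and endpoints. The feature names, numbers, and threshold are illustrative assumptions, not any particular vendor’s pipeline.

```python
# Minimal sketch: correlating telemetry from several sources and flagging
# outliers with an unsupervised model. Field names, values, and thresholds
# are illustrative assumptions, not a specific product's schema.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one asset (repo, workload, or endpoint); each column is a
# weak signal aggregated over the last 24 hours.
signals = np.array([
    # failed_logins, outbound_MB, new_dependencies, privilege_changes
    [2,   120,  1, 0],
    [1,    95,  0, 0],
    [3,   110,  2, 1],
    [45, 2300, 14, 6],  # the kind of combination a human might not connect quickly
])

model = IsolationForest(contamination=0.1, random_state=0).fit(signals)
scores = model.decision_function(signals)  # negative scores indicate outliers

for row, score in zip(signals, scores):
    if score < 0:
        print(f"review: {row.tolist()} (anomaly score {score:.2f})")
```

The point is not the algorithm but the correlation: one model sees all four columns at once, which is what replaces the stack of point tools.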
Of course, the same capabilities can be turned to nefarious purposes, including convincing deep-fake voicemails used in spear-phishing. No one should be naive to the fact that threat actors can iterate faster than ever, because generative AI reduces the cost of experimentation. Uninvited intelligence, or AI that slips into the environment without oversight, is a notable risk for businesses.
If you treat every model as a black box, you’re inserting uninvited intelligence into the business. You must know what data it trains on and how decisions are reached.
Despite AI’s immense processing power, context remains a human skill. “A model can flag anomalous traffic, but only an analyst can decide whether it’s malicious or a new business process. AI amplifies judgment; it doesn’t replace it,” said Kevin Halkerd, Risk and Compliance Manager at e4.
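As a simple illustration of that division of labour, the sketch below scores and routes an alert but leaves the verdict to a person. The threshold and field names are hypothetical.

```python
# Sketch of human-in-the-loop triage: the model only scores and routes;
# an analyst makes the final call. Values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    anomaly_score: float          # e.g. from a detector; higher = more unusual
    analyst_verdict: str = "pending"

REVIEW_THRESHOLD = 0.8

def triage(alert: Alert) -> str:
    """Route anomalous traffic to an analyst instead of auto-blocking it."""
    if alert.anomaly_score >= REVIEW_THRESHOLD:
        return "queue_for_analyst"  # could be malicious, or a new business process
    return "log_only"

print(triage(Alert(source_ip="10.0.4.17", anomaly_score=0.93)))  # queue_for_analyst
```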
Governance alongside the technology
Overall, governance should be the deciding factor in how AI is adopted. Applying DevSecOps guardrails, secure-code scanning, and tiered access controls to every deployed model is non-negotiable. Halkerd says auditability should be embedded at the build stage, not after a breach has happened.
Organisational AI checklists need to start early and address data quality, bias testing, explainability, and explicit ownership. You cannot secure what you cannot see, and you cannot justify what you cannot explain. Ultimately, governance needs to come before AI experimentation and deployment. It becomes difficult to decide what’s right when you don’t have guardrails, so one approach is to create them internally first.
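One way to make such a checklist enforceable is to encode it as a pre-deployment gate that also captures the build-stage guardrails described above. The sketch below is illustrative only; the record fields and their names are assumptions, not a reference to any specific governance framework.

```python
# Illustrative pre-deployment gate: a model only ships if the governance
# checklist items are answered. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    owner: str                                                  # explicit ownership
    training_data_sources: list = field(default_factory=list)   # data provenance / quality
    bias_tested: bool = False
    explainability_notes: str = ""                               # how decisions can be justified
    code_scan_passed: bool = False                               # build-stage DevSecOps guardrail

def deployment_gate(m: ModelRecord) -> list:
    """Return the reasons a model is blocked; an empty list means it may ship."""
    blockers = []
    if not m.owner:
        blockers.append("no accountable owner")
    if not m.training_data_sources:
        blockers.append("training data provenance unknown")
    if not m.bias_tested:
        blockers.append("bias testing not recorded")
    if not m.explainability_notes:
        blockers.append("decisions cannot be explained")
    if not m.code_scan_passed:
        blockers.append("secure-code scan missing or failed")
    return blockers

candidate = ModelRecord(name="phishing-classifier-v2", owner="SecOps")
print(deployment_gate(candidate))  # lists what must be fixed before deployment
```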
Halkerd says the way an organisation views AI is a fundamental starting point. “AI is just one piece of the puzzle. The real power comes in how you’ve adapted it to your environment safely and effectively to enhance the security of your business.” Automation can also hard-wire assurance without overwhelming human teams. “Think of it as AI policing AI,” he says: automated policy checks run alongside every model, scanning for vulnerable code or unexpected outputs.
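A very small illustration of that idea is a policy check that inspects every generated output before it is used. The patterns below are deliberately simple placeholders; a real deployment would pair a proper code scanner with organisation-specific policies.

```python
# Sketch of "AI policing AI": a lightweight policy check run on every model
# output before it reaches a user or a pipeline. Patterns are placeholders.
import re

POLICY_PATTERNS = {
    "hard-coded secret": re.compile(r"(api[_-]?key|password)\s*=\s*['\"]\w+['\"]", re.I),
    "shell injection risk": re.compile(r"os\.system\(|subprocess\.call\(.*shell=True"),
}

def check_output(generated_text: str) -> list:
    """Return the policy violations found in a model's output (e.g. generated code)."""
    return [name for name, pattern in POLICY_PATTERNS.items()
            if pattern.search(generated_text)]

suggestion = 'import os\npassword = "hunter2"\nos.system("rm -rf " + path)'
print(check_output(suggestion))  # ['hard-coded secret', 'shell injection risk']
```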
We must start navigating an AI world we don’t fully understand yet and build protection into every layer while we learn. Organisations should see AI as a tool that magnifies intent. Govern it well, and it multiplies your defences. Leave it unchecked, and it multiplies your risk.
Editor’s Note: Written by Fikile Sibiya, CIO at e4, a leading partner in digital transformation.