The New Attack Surface: Why AI Is Your Infrastructure Now
AI models are no longer features. They’re infrastructure. Here’s how to defend them like it.
We used to think of infrastructure as something physical or virtual: Servers, storage, networks. But when you introduce AI, especially Large Language Models (LLMs), you’re dealing with something fundamentally different. This isn’t just another application component. LLMs behave in unpredictable, data-driven ways and can be compromised in ways we’re not used to. This piece looks at why our old security playbooks fall short and what we need to rethink when intelligence becomes part of the stack.
From Application Logic to Probabilistic Intelligence
Traditional systems follow clear logic. You give an input, you get a predictable output. With LLMs, you give an input and get... something plausible. The logic isn’t written in code anymore, it’s learned from data. That makes it hard to reason about, and even harder to secure.
Instead of wrapping a firewall around a system, you need to consider what the model is being fed. Prompts, plugins, documents retrieved at runtime. All of that becomes your new trust boundary.
This is why LLMs aren’t just APIs, they’re dynamic, fuzzy interpreters of context. And that context can be attacked.
The OWASP Top 10 for LLMs: Redefining Risk Categories
OWASP did a great job translating their classic Top 10 risks to LLM-based systems. It’s worth reading in full, but here are a few that stood out to me:
Prompt Injection (LLM01): Just like SQL injection, but using natural language. An attacker can tell the model to ignore the system prompt and do something else. More on OWASP Top 10
Insecure Output Handling (LLM02): You take model output and pass it into a browser or a shell without checking it. Bad idea.
Training Data Poisoning (LLM03): The AI equivalent of a supply chain attack. Feed the model poisoned examples during training and wait for it to misbehave later.
Excessive Agency (LLM08): Giving a model access to tools, emails, or file systems without tight controls. What could go wrong?
This isn’t just rebranding old risks. It’s a shift. We’re now securing behavior, not just code.
AI Platforms and the Collapse of Clear Responsibility Boundaries
In cloud, we got used to the Shared Responsibility Model. You handle your app and data. The provider handles the infrastructure. Clean split.
With AI PaaS, that split becomes blurry. You bring your own data to fine-tune a hosted model. Now whose job is it to:
Protect that data from poisoning?
Safeguard the resulting fine-tuned model?
Handle compliance if the model says something problematic?
We can’t just assume the cloud provider has it covered. As architects, we need to be very explicit in contracts and SLAs about where the boundaries are and fill the gaps ourselves.
Compliance as a Design-Time Constraint, Not a Postmortem
Most regulations weren’t written with AI in mind. But they still apply. Here’s where things get tricky:
GDPR Article 22: People can opt out of automated decisions. That means your AI needs explainability. Explainable Artificial Intelligence (XAI)
HIPAA: Your model must only see the data it truly needs. No over-collection. No lazy scoping.
SOX: If AI touches financial reporting, it’s part of the audit trail. You’ll need logging, model versioning, and change controls.
This stuff isn’t theoretical. It needs to shape your architecture upfront.
Defending Intelligence: The Role of Adaptive Perimeters
Static firewalls and rule-based WAFs won’t cut it anymore. LLMs can be attacked with subtle prompts, hidden payloads, and unusual traffic patterns. Your perimeter needs to get smarter.
That’s where adaptive or AI-enhanced WAFs come in. They:
Learn what normal traffic looks like
Spot strange patterns or spike behaviors
Enforce API contracts and throttle abuse
React in real time to new threats
I built a working version of this concept in a recent tutorial: Self-Learning WAF with Quarkus. It’s worth exploring.
Convergence of Disciplines: Security, MLOps, and Data Engineering
To secure an AI system, you need cross-functional thinking. Security folks have to learn prompt injection. MLOps folks need to track drift and adversarial behavior. Everyone has to get used to explaining what the model is doing.
New roles are emerging. "AI Security Engineer" is no longer hypothetical. It’s someone who understands models, prompt risks, and how to design guardrails that actually hold.
Strategic Implications for the Enterprise
This is not a “DevOps moment.” It’s more like a “New Stack moment.” If you’re running AI in production, you’re operating a probabilistic, semi-autonomous system with unclear attack vectors.
This changes things:
Budgets need to shift from traditional perimeter tooling to model telemetry and behavior monitoring
Vendors must be challenged on SLAs for fine-tuned models
Boards and compliance officers will need a crash course in prompt safety and RAG pipelines
AI is infrastructure now. Treat it with the same seriousness you treat your payment processing or identity systems.
Trust is the New Attack Surface
When we say “AI is part of the infrastructure,” we have to also say: trust is part of the threat model.
We’re no longer just securing systems that do what we tell them. We’re securing systems that interpret what we mean. That’s a whole new ball game.
The solution isn’t just technical. It’s architectural. We need clearer boundaries, stronger defaults, and shared language between roles. That’s the only way we’ll stay ahead.