Insights
AI Security: Risk vs. Resilience in Regulated Sectors
How Europe’s Most Trusted Industries Are Securing the Future of Artificial Intelligence.
AI is not just another tool in the cyber arsenal; it is both an accelerator and a risk factor. That message rang out loud and clear at Cloud & Cyber Security Expo Frankfurt 2025, part of Tech Show Frankfurt, where security and data leaders from highly regulated sectors debated a new reality: as AI transforms finance, healthcare and energy, it is also reshaping how risk and resilience are defined.
As one banking CSO noted: “AI helps us detect anomalies in real time - but every model we deploy is another surface that needs defending.”
Risk and resilience: two sides of the same coin
AI strengthens resilience when it is used to:
- Detect and respond to cyber threats faster.
- Automate compliance and anomaly monitoring.
- Support rapid, evidence-based crisis decisions.
But it increases risk when:
- Models are trained on unverified or non-sovereign data.
- Outputs cannot be explained to regulators.
- Attackers weaponise AI to automate intrusions or social engineering.
As one panellist put it: “The same algorithms that make systems smarter can also make attacks smarter.”
Inside regulated sectors
Healthcare: protecting patients while innovating
For healthcare providers, AI promises better outcomes - but data sensitivity is paramount. Leaders from Fresenius Medical Care described the challenge: “We see huge potential in AI, but we can’t compromise on data protection. Storing patient data must be compliant, explainable, and defensible.”
Under the EU AI Act, such medical applications are classed as high risk, requiring traceability, human oversight and documentation. Hospitals are now building AI audit trails and explainability dashboards so regulators can verify each model’s decisions.
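As an illustration only, the sketch below shows what one entry in such an audit trail might look like. The field names, the hashing approach and the example model are assumptions for this article, not a specific hospital's schema or an EU AI Act template.

```python
# A minimal sketch of an AI decision audit-trail record. Field names and the
# "sepsis-risk" model are hypothetical; the idea is that every prediction is
# logged with the model version and a hash of its input, so a regulator can
# later trace a decision without storing raw patient data in the log.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_name: str, model_version: str,
                 features: dict, prediction, explanation: dict) -> dict:
    """Build one traceable record linking input, model version and output."""
    payload = json.dumps(features, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        "input_hash": hashlib.sha256(payload).hexdigest(),  # no raw patient data
        "prediction": prediction,
        "explanation": explanation,  # e.g. top feature attributions
    }

# Example: one entry in an append-only log a regulator could later replay.
record = audit_record("sepsis-risk", "2.4.1",
                      {"age": 67, "lactate": 3.1}, "high",
                      {"top_features": ["lactate", "age"]})
print(json.dumps(record, indent=2))
```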
Finance: layered defence and accountability
In banking, AI underpins fraud detection and credit scoring, yet the same tools must sit within robust control structures.
A Deutsche Bank CSO explained: “Every AI system we deploy sits inside a three-tier protection model - encryption at every layer, real-time SOC oversight and zero-trust segmentation.”
Financial institutions increasingly treat AI models like core infrastructure, applying strict ownership, change control, and regulatory reporting frameworks.
Aviation and manufacturing: explainability equals safety
At the Digital Transformation Theatre, Boeing specialists highlighted how predictive AI prevents equipment failure, but trust depends on transparency:
“When AI predicts a part failure, we can’t just accept the answer. We need to know why. Explainability is safety.”
Regulation demands transparency
Across Frankfurt, experts agreed: regulators want proof, not promises. The EU AI Act now requires high-risk models to be transparent, traceable, and auditable from data sources to decision logic.
This is driving organisations to implement:
- Explainability dashboards showing how outcomes are reached.
- Model registries logging all versions and inputs (see the sketch after this list).
- Audit-ready documentation for internal and external review.
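To make the registry idea concrete, here is a minimal sketch of what one entry could record. The fields, the data-snapshot path and the sign-off body are hypothetical; production registries (MLflow is one real example) add artifact storage and approval workflows on top of this.

```python
# A minimal sketch of a model registry entry under the EU AI Act's
# traceability requirements. All field names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    version: str
    owner_team: str          # accountable team, per strict ownership
    training_data_ref: str   # pointer to the exact training-data snapshot
    approved_by: str         # change-control sign-off
    risk_class: str          # e.g. "high-risk" under the EU AI Act
    notes: list[str] = field(default_factory=list)

registry: dict[tuple[str, str], ModelRecord] = {}

def register(rec: ModelRecord) -> None:
    """Log every version so auditors can trace decisions back to inputs."""
    registry[(rec.name, rec.version)] = rec

# Hypothetical example: a credit-scoring model with a named owner and sign-off.
register(ModelRecord("credit-scoring", "1.7.0", "fraud-analytics",
                     "s3://datasets/credit/2025-03-snapshot",
                     "model-risk-committee", "high-risk"))
```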
A legal advisor from the automotive sector summarised it simply:
“If you can’t explain how your model behaves, you can’t defend it.”
Treating AI models as critical assets
Resilience now means extending governance to the AI layer. Leading organisations are:
- Cataloguing models and ownership - mapping all AI assets to responsible teams.
- Securing pipelines - enforcing encryption, access control and runtime monitoring.
- Ensuring explainability - using dashboards and logs to visualise decision paths.
- Preparing AI incident playbooks - for data poisoning, bias or model drift (a drift-check sketch follows this list).
- Testing continuously - running adversarial simulations to expose weak points.
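As one illustration of what continuous monitoring can mean in practice, the sketch below flags when live model scores drift away from a training baseline. The statistic, the threshold and the alerting hook are assumptions chosen for brevity, not a standard method or any speaker's implementation.

```python
# A minimal sketch of drift monitoring: flag when the mean of live model
# scores shifts too far from the training baseline, measured in baseline
# standard deviations. Threshold and data are purely illustrative.
import statistics

def detect_drift(baseline: list[float], live: list[float],
                 threshold: float = 0.2) -> bool:
    """Return True when the live mean shifts beyond `threshold` baseline stdevs."""
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    shift = abs(statistics.mean(live) - mu) / sigma
    return shift > threshold

baseline_scores = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16]  # illustrative
live_scores = [0.31, 0.29, 0.35, 0.33]                   # illustrative

if detect_drift(baseline_scores, live_scores):
    print("Drift detected: trigger the AI incident playbook")  # hypothetical hook
```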
This approach mirrors classic information-security frameworks, but is tuned to protect living, evolving AI systems.
From compliance to confidence
Despite the challenges, optimism ran through Frankfurt’s sessions. Many now view AI governance as a business enabler, not a blocker. Transparent and accountable systems build trust with regulators, partners and the public.
Dr. Swantje Westpfahl of the Institute for Security and Safety captured it well:
“AI must be treated not as a black box, but as part of the critical infrastructure that underpins trust. If we secure AI, we secure the future.”
The key question for boards
As AI becomes central to Europe’s most trusted sectors, one question defines the path ahead:
Can your organisation secure the AI that secures everything else?