Tech Show Frankfurt
How Secure Is Your AI Supply Chain? Spotting backdoors, poisoned data & hidden dependencies
04 Jun 2025
Cloud AI & Cyber Security
Ensuring and Applying Cloud Security
Pre-trained models from hubs such as Hugging Face and TensorFlow Hub promise faster development, yet they can hide backdoors, bias and outright sabotage. Add in poisoned public datasets, dependency-confusion attacks on software packages, and misconfigured CI/CD pipelines, and you have an ideal playground for adversaries.
This talk unpacks the five weakest links of today’s AI ecosystem—manipulated models, data poisoning, package hijacking, insecure model-deployment pipelines, and the near-total absence of SBOMs for models—then shows practical checks you can adopt right now. Attendees leave with a toolkit of low-friction steps to lock down their machine-learning supply chain before an attacker does it for them.
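One illustration of the kind of low-friction check the talk has in mind: pinning a downloaded model artifact to a known SHA-256 digest before it is ever loaded. The sketch below uses only the Python standard library; the file name and expected digest are placeholders, and in practice the digest would come from the publisher's release notes or your own internal model registry.

```python
import hashlib
import sys
from pathlib import Path

# Placeholder values for illustration only.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"
MODEL_PATH = Path("models/sentiment-classifier.safetensors")

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large model weights never sit fully in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(MODEL_PATH)
    if actual != EXPECTED_SHA256:
        # Refuse to load an artifact whose contents differ from the pinned release.
        sys.exit(f"Checksum mismatch for {MODEL_PATH}: refusing to load the model.")
    print(f"{MODEL_PATH} matches the pinned digest; safe to load.")
```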