
ETSI EN 304 223: First Global AI Security Standard
ETSI EN 304 223 establishes the first global AI security standard with baseline requirements for enterprises deploying machine learning systems and AI agents.
The ETSI EN 304 223 standard establishes the first globally applicable European Standard for AI cybersecurity, introducing baseline security requirements that enterprises must integrate into their governance frameworks. Unlike traditional software security measures, this standard addresses AI-specific vulnerabilities that existing protocols often miss.
As organizations embed machine learning into core operations, the standard covers everything from deep neural networks and generative AI to basic predictive systems. It explicitly excludes only academic research applications, making it relevant for most production AI deployments.
Clear Role Definitions Solve Ownership Problems
A persistent challenge in enterprise AI adoption has been determining who owns security risk. The ETSI standard resolves this by defining three primary technical roles with specific obligations.
- Developers — responsible for model design, training data provenance, and security documentation
- System Operators — manage deployment infrastructure and runtime security controls
- Data Custodians — control data permissions, integrity, and usage alignment with sensitivity levels
For many enterprises, these roles overlap significantly. A financial services firm fine-tuning an open-source model for fraud detection functions as both Developer and System Operator. This dual status triggers comprehensive obligations across the entire AI lifecycle.
The inclusion of Data Custodians as a distinct stakeholder group directly impacts Chief Data Officers. Data Custodians now carry explicit security responsibilities, effectively placing a security gatekeeper within data management workflows.
Security-First Design Requirements
The standard makes clear that security cannot be retrofitted at deployment. During design phases, organizations must conduct threat modeling that addresses AI-native attacks like membership inference and model obfuscation.
One key provision requires developers to restrict functionality to reduce attack surface. If a system uses a multi-modal model but only requires text processing, the unused modalities represent unnecessary attack surface that should be disabled or justified.
- Asset inventory — comprehensive tracking of models, dependencies, and connectivity
- Shadow AI discovery — systematic identification of unmanaged model deployments
- Disaster recovery — specific plans for restoring "known good state" after AI-targeted attacks
- Minimal functionality — deploying only required capabilities from foundation models
This requirement forces technical leaders to reconsider the common practice of deploying massive, general-purpose foundation models where specialized alternatives would suffice.
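The inventory and minimal-functionality requirements above can be sketched as a simple registry check. This is an illustrative data model, not a schema from the standard; all field and function names are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record; field names are illustrative,
# not prescribed by ETSI EN 304 223.
@dataclass
class ModelAsset:
    name: str
    version: str
    owner_role: str                     # e.g. "Developer", "System Operator"
    enabled_modalities: set[str] = field(default_factory=set)
    dependencies: list[str] = field(default_factory=list)
    registered: bool = True             # False => candidate "shadow AI"

def shadow_deployments(assets: list[ModelAsset]) -> list[ModelAsset]:
    """Flag assets discovered in the environment but never formally registered."""
    return [a for a in assets if not a.registered]

def excess_modalities(asset: ModelAsset, required: set[str]) -> set[str]:
    """Modalities deployed beyond what the use case needs (excess attack surface)."""
    return asset.enabled_modalities - required

fraud_model = ModelAsset(
    name="fraud-detector", version="2.1.0", owner_role="System Operator",
    enabled_modalities={"text", "image"},
)
print(excess_modalities(fraud_model, required={"text"}))  # {'image'}
```

A scan that emits a non-empty set from `excess_modalities` would, under this sketch, prompt either disabling the modality or documenting why it remains enabled.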
Supply Chain Transparency Mandates
Supply chain security presents immediate friction for enterprises relying on third-party vendors or open-source repositories. The standard requires System Operators to justify and document security risks when using poorly-documented AI components.
Procurement teams can no longer accept "black box" solutions. Developers must provide cryptographic hashes for model components to verify authenticity.
- Training data documentation — source URLs and acquisition timestamps for public datasets
- Component verification — cryptographic hashes for all model elements
- Risk justification — documented rationale for using unverified components
- Audit trails — comprehensive logs for post-incident investigation
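Component verification along these lines can be as simple as streaming each artifact through SHA-256 and comparing the digest against the value published by the Developer. A minimal sketch, with illustrative function names not taken from the standard:

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream-hash an artifact so large weight files need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_component(path: Path, expected_hash: str) -> bool:
    """Compare a model component against the hash the Developer published."""
    return sha256_file(path) == expected_hash
```

In practice a deployment pipeline would call `verify_component` before loading weights, and refuse to proceed (or require a documented risk justification) on mismatch.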
Where training data comes from public sources—common for Large Language Models—developers must maintain detailed acquisition records. This audit trail becomes critical for investigating potential data poisoning during training phases.
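An acquisition record of this kind can be kept as an append-only log. The schema below is an assumption for illustration; the standard requires source and timestamp records but does not prescribe a format.

```python
import json
from datetime import datetime, timezone

def record_acquisition(log_path: str, source_url: str,
                       content_hash: str, license_note: str = "") -> dict:
    """Append one dataset-acquisition entry to a JSONL audit trail.

    Hypothetical schema: source URL, UTC acquisition timestamp,
    content hash, and license note.
    """
    entry = {
        "source_url": source_url,
        "acquired_at": datetime.now(timezone.utc).isoformat(),
        "sha256": content_hash,
        "license": license_note,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")   # append-only: one entry per line
    return entry
```

An append-only line-per-entry format keeps the trail cheap to write during ingestion and easy to grep during a post-incident data-poisoning investigation.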
API Security Controls
Enterprises offering APIs to external customers must implement controls designed to mitigate AI-focused attacks. Rate limiting prevents adversaries from reverse-engineering models through high-volume queries or flooding feedback channels with poisoned data.
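One common way to implement such rate limiting is a per-client token bucket; the sketch below is a generic technique, not a mechanism specified by the standard.

```python
import time

class TokenBucket:
    """Per-client token bucket: throttles the rapid-fire query patterns
    typical of model-extraction attempts without blocking normal traffic."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate                  # tokens replenished per second
        self.capacity = capacity          # burst allowance
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; deny the request otherwise."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A gateway would keep one bucket per API key, tuning `rate` and `capacity` so legitimate bursts pass while sustained extraction-scale query volumes are refused.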
Lifecycle Management and Monitoring
The standard treats major updates, such as retraining on new data, as deployments of new versions. Each significant model change therefore triggers renewed security testing and evaluation.
Continuous monitoring extends beyond uptime metrics. System Operators must analyze logs to detect data drift or gradual behavior changes that could indicate security breaches.
- Version control — security evaluation for each major model update
- Drift detection — monitoring for gradual behavior changes indicating compromise
- Performance correlation — linking security metrics to model accuracy and reliability
- End-of-life procedures — secure data disposal when decommissioning models
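Drift detection of the kind listed above can start from something as crude as watching a monitored signal's mean wander away from its baseline. A minimal sketch under that assumption; production systems would use PSI, KS tests, or similar distribution checks.

```python
import statistics

def mean_shift_alert(baseline: list[float], recent: list[float],
                     threshold: float = 3.0) -> bool:
    """Alert when the recent mean of a monitored signal (e.g. model
    confidence scores) moves more than `threshold` baseline standard
    deviations away from the baseline mean."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline) or 1e-9   # avoid divide-by-zero
    return abs(statistics.fmean(recent) - mu) / sigma > threshold
```

Wired into log analysis, such a check turns "gradual behavior change" from a vague worry into an alert a System Operator can triage alongside conventional security telemetry.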
The "End of Life" phase requires involving Data Custodians to ensure secure disposal of data and configuration details. This prevents leakage of sensitive intellectual property through discarded hardware or forgotten cloud instances.
Training and Awareness Programs
Compliance requires reviewing existing cybersecurity training programs. The standard mandates role-specific training: developers must understand secure coding for AI, while general staff remain aware of threats such as social engineering via AI outputs.
Implementation and Future Developments
Implementing ETSI EN 304 223 baselines provides structure for safer AI innovation. By enforcing documented audit trails, clear role definitions, and supply chain transparency, enterprises can mitigate adoption risks while establishing defensible positions for future regulatory audits.
The standard serves as a necessary benchmark alongside the EU AI Act, addressing the reality that AI systems possess specific vulnerabilities traditional software security often misses.
An upcoming Technical Report (ETSI TR 104 159) will apply these principles specifically to generative AI, targeting issues like deepfakes and disinformation campaigns.
Bottom Line
The ETSI standard shifts AI security from optional best practice to mandatory baseline requirement. For organizations building AI agents and systems, this represents both compliance overhead and competitive advantage—early adopters will have proven security frameworks when customers and regulators demand them.