The Transparency Requirements You're Probably Missing
Most discussions about the EU AI Act focus on high-risk obligations: conformity assessments, risk management systems, technical documentation. These are significant, but they apply to a relatively small subset of AI systems. Article 50’s transparency obligations apply far more broadly, and many businesses are overlooking them entirely.
Transparency requirements under the Act apply regardless of risk classification. If your AI system interacts with people, generates content, or processes biometric data, you have transparency obligations, even if the system is otherwise minimal-risk.
What Article 50 actually requires
Article 50’s transparency requirements break down into four parts:
1. AI interaction disclosure (Article 50(1))
If your AI system is “intended to interact directly with natural persons,” you must inform those persons that they are interacting with an AI system. The disclosure must be provided “in a clear and distinguishable manner at the latest at the time of the first interaction.”
This covers:
- Customer service chatbots
- Virtual assistants
- AI-powered phone agents
- Interactive AI features in apps and websites
- Any system where a person might reasonably believe they’re communicating with a human
The requirement is more specific than many businesses realise. “Clear and distinguishable” means the disclosure can’t be buried in terms of service, hidden in a footer, or presented in a way that most people won’t notice. It needs to be prominent and timely — before or at the moment the interaction begins.
What “good” looks like: A chatbot that opens with “I’m an AI assistant. How can I help you?” or a clear label visible throughout the conversation stating “You are chatting with an AI.”
What “bad” looks like: A footnote at the bottom of the page saying “This service may use artificial intelligence.” A disclosure buried three clicks deep in a help centre article. No disclosure at all because “it’s obvious it’s a chatbot.”
The Act does provide an exception: this obligation doesn’t apply where it is “obvious to a reasonably well-informed, observant and circumspect natural person” that they are interacting with AI. But relying on this exception is risky. What seems “obvious” to a technology company’s product team may not be obvious to a 70-year-old customer contacting support. The safe approach is to disclose explicitly.
2. Emotion recognition and biometric categorisation (Article 50(3))
If your AI system performs emotion recognition or biometric categorisation, you must inform the people it’s applied to. This obligation falls on deployers: if you use a system that analyses facial expressions, voice tone, or other biometric signals to infer emotional states, or that categorises people by attributes like age, gender, or ethnicity, you must tell them.
This obligation is narrower in scope but frequently overlooked. Call centre analytics tools that assess caller sentiment through voice analysis are performing emotion recognition. Customer experience tools that categorise visitors by demographic attributes based on camera feeds are performing biometric categorisation. In both cases, the people affected must be informed.
Note also that emotion recognition in workplaces and educational institutions is prohibited outright under Article 5, with limited exceptions for medical and safety purposes. The Article 50 transparency obligation applies to emotion recognition that’s permitted in other contexts.
3. Synthetic content labelling (Article 50(2) and 50(4))
Providers of AI systems that generate synthetic content — text, images, audio, video — must ensure that the outputs are marked as artificially generated or manipulated, in a “machine-readable format” where technically feasible.
Deployers who use AI to generate or manipulate content that constitutes a deep fake (image, audio, or video content that “appreciably resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful”) must disclose that the content has been artificially generated or manipulated.
This has broad implications:
AI-generated images used in marketing, product listings, or social media must be labelled. If your marketing team uses Midjourney or DALL-E to create product images, those images need machine-readable markers and, in many contexts, visible disclosure.
AI-generated text that could be mistaken for human-written content triggers labelling obligations for the provider of the AI tool. As a deployer, if you publish AI-generated content (articles, product descriptions, reports) that would “falsely appear to be authentic or truthful,” you need to disclose it.
AI-modified media — images that have been retouched, video that’s been manipulated, audio that’s been synthesised — need to be marked.
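The Act doesn’t prescribe a marking technology; in practice, provenance standards such as C2PA content credentials and IPTC’s digital source type vocabulary are the leading candidates. As a rough sketch only, not a compliance-grade solution, the following Python example embeds an AI-generation marker in a PNG’s metadata using Pillow. The text-chunk keys are illustrative assumptions, though the IPTC vocabulary URI shown is a real term for AI-generated media.

```python
# Minimal sketch: embed a machine-readable AI-generation marker in PNG metadata.
# The text-chunk keys are illustrative, not mandated by the AI Act; production
# systems would more likely attach C2PA content credentials.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, adding metadata identifying it as AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    # IPTC's digital source type vocabulary defines a term for AI-generated media.
    metadata.add_text(
        "DigitalSourceType",
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    )
    metadata.add_text("Generator", generator)  # e.g. the model or tool used
    image.save(dst_path, pnginfo=metadata)

# Hypothetical usage, assuming draft.png exists:
mark_as_ai_generated("draft.png", "draft_labelled.png", generator="example-image-model")
```

Note that plain text chunks can be stripped by any re-encode, which is why cryptographically verifiable provenance schemes like C2PA are where the industry is heading.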
The Act lightens the obligation for content that forms part of an “evidently artistic, creative, satirical, fictional or analogous” work: you must still disclose that the content is AI-generated, but in a way that doesn’t hamper the display or enjoyment of the work. Content whose use is authorised by law for detecting, preventing, investigating or prosecuting criminal offences is exempt altogether.
4. Transparency on top of high-risk obligations (Article 50(6))
Article 50(6) makes clear that paragraphs 1 through 4 apply in addition to the requirements for high-risk systems in Chapter III. If your high-risk AI system also interacts with people, you need both the high-risk compliance framework and the transparency disclosures.
Where businesses typically fall short
The chatbot problem
The most common gap is customer-facing chatbots that don’t clearly identify themselves as AI. Many businesses have deployed AI chatbots that mimic human conversation patterns (using first-person pronouns, expressing empathy, asking personal questions) without any disclosure that the customer is talking to a machine.
Some businesses deliberately blur the line, believing that customers engage more naturally with a “human-sounding” bot. Under the AI Act, this isn’t optional. Disclosure is mandatory, and it must be clear and timely.
The fix is straightforward: add a visible, persistent label to the chat interface identifying it as an AI system, and include a disclosure in the chatbot’s opening message.
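As a sketch of the opening-message half of that fix, assuming a simple server-side chat handler (the class and method names below are hypothetical, not any particular chatbot framework):

```python
# Illustrative sketch of a chat handler that front-loads the Article 50(1)
# disclosure. Class and method names are hypothetical, not a real framework.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Ask to be transferred if you would like to speak to a person."
)

class DisclosingChatSession:
    """Wraps any bot so the first reply always carries the AI disclosure."""

    def __init__(self, bot) -> None:
        self.bot = bot          # hypothetical object with a respond() method
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        answer = self.bot.respond(user_message)
        if not self.disclosed:
            # Disclose at the latest at the time of the first interaction.
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n\n{answer}"
        return answer
```

The persistent on-screen label is a front-end concern; the point of the sketch is that the disclosure is injected before the first substantive reply rather than left to a footer or a terms page.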
Generated content without attribution
Many businesses now use AI to generate marketing copy, social media posts, product descriptions, email campaigns, and blog content. Very few label this content as AI-generated.
The machine-readable labelling obligation falls primarily on the AI tool provider (OpenAI, Anthropic, etc.), but the deployer obligation for deep fakes — content that “would falsely appear to be authentic” — applies to the business publishing it. An AI-generated product review, an AI-written thought leadership article published under a human author’s name, or an AI-generated case study all raise questions under this provision.
The safest approach is transparency: if content was substantially generated by AI, say so.
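One way to operationalise that is a pre-publish gate that blocks content flagged as AI-generated unless a disclosure line is present. A minimal sketch, assuming a hypothetical content model (the Act doesn’t mandate any particular wording):

```python
# Hypothetical pre-publish check: block AI-generated content that lacks a
# visible disclosure. The field names and wording are illustrative only.
from dataclasses import dataclass

DISCLOSURE_LINE = "This content was generated with the assistance of AI."

@dataclass
class Article:
    title: str
    body: str
    ai_generated: bool

def ready_to_publish(article: Article) -> bool:
    """Anything flagged as AI-generated must carry the disclosure line."""
    return (not article.ai_generated) or (DISCLOSURE_LINE in article.body)

draft = Article(
    title="Quarterly trends",
    body="...\n\n" + DISCLOSURE_LINE,
    ai_generated=True,
)
assert ready_to_publish(draft)
```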
Sentiment analysis in call centres
Many businesses use AI-powered call centre analytics that assess caller sentiment, detect emotions, or categorise callers by demographic attributes. These systems often run in the background without the caller’s knowledge.
Under Article 50(3), the caller must be informed. And if the system runs in a workplace context (monitoring employee performance via emotional analysis), it may be prohibited entirely under Article 5.
Biometric categorisation in physical spaces
Retail analytics systems that use cameras to estimate visitor demographics — age, gender, ethnicity — are performing biometric categorisation. If these systems are in place, the people being categorised must be informed. A sign at the entrance stating that AI-powered analytics are in use may be sufficient, but it needs to be clear and prominent.
Implementation checklist
To ensure you’re meeting Article 50 requirements:
For AI systems that interact with people:
- Add clear, visible AI disclosure labels to chat interfaces
- Include AI disclosure in the system’s opening message or interaction
- Ensure the disclosure is prominent — not hidden in footers or terms
- Review whether the “obvious” exception genuinely applies (in most cases, disclose anyway)
For emotion recognition and biometric systems:
- Identify all systems that assess emotional states or categorise by biometric attributes
- Verify none operate in prohibited contexts (workplace, education)
- Implement clear notification to affected individuals
- Document the legal basis for processing biometric data under GDPR as well
For AI-generated content:
- Inventory all uses of AI content generation across the organisation
- Implement machine-readable labelling where you are the provider
- Add disclosure for content that could be mistaken as human-created
- Establish editorial policies for AI-assisted content creation
For high-risk systems:
- Verify that transparency obligations are met in addition to high-risk compliance
- Don’t assume that high-risk compliance covers transparency — they’re separate requirements
The enforcement angle
Transparency violations fall under Article 99(4), which covers non-compliance with Article 50 alongside most other operator obligations: fines of up to €15 million or 3% of global annual turnover. Only prohibited practices attract a higher ceiling (up to €35 million or 7%). Either way, the exposure is substantial.
More importantly, transparency violations are easy for regulators to detect. A mystery-shopper interaction with an undisclosed chatbot is straightforward evidence. AI-generated content without attribution is publicly visible. These are low-hanging fruit for enforcement actions, especially in the early days of the Act, when regulators will be looking for visible demonstrations of their authority.
The transparency requirements are also among the cheapest to implement. Adding a label to a chatbot, disclosing AI-generated content, and informing people about biometric processing are operational changes, not engineering projects. There’s no good reason to be non-compliant on transparency.