June 16, 2025 – Rome: Italy's antitrust authority, AGCM, has formally opened an investigation into Chinese artificial intelligence firm DeepSeek, known for its generative AI chatbot, over concerns that users are not adequately warned about the potential for false or misleading outputs, a phenomenon widely known as "AI hallucinations".

⸻

⚠️ What triggered the probe?
• AGCM alleges DeepSeek failed to provide clear, immediate, and intelligible warnings about the risk of hallucinations: instances in which the AI generates invented or inaccurate information in response to user queries.
• The watchdog is evaluating whether DeepSeek violated consumer protection regulations by omitting critical disclaimers about the AI's limitations and its potential to produce misinformation.

⸻

🛡️ Broader regulatory backdrop
• The consumer-protection probe adds to an earlier privacy-focused action: in February, Italy's data protection authority (Garante) ordered the DeepSeek chatbot blocked in Italy, citing inadequate explanations of its privacy policy and data-handling practices.
• Italy has taken a leading role in scrutinizing AI platforms, having previously imposed a temporary ban on ChatGPT over privacy concerns under the GDPR.

⸻

🚨 Implications for DeepSeek
• Depending on the outcome, AGCM's probe could result in administrative fines, mandatory warning labels, or restrictions on DeepSeek's services within Italy.
• More significantly, it could set a precedent across the EU, requiring generative AI developers to adopt transparency measures, particularly hallucination warnings, that some EU member states have already begun enforcing.

⸻

🌍 What's next?
• AGCM will assess DeepSeek's user-facing materials and in-app disclosures.
• The watchdog may demand that DeepSeek implement stronger safeguards and explicit user notifications to ensure transparency about the model's accuracy and risks.
• DeepSeek has declined to comment on the probe, and no timeline has been announced for a resolution or potential penalties.

⸻

This move reflects Italy's increasingly proactive stance on AI risks, from data privacy to truthfulness, and highlights growing global pressure on developers to prioritize transparency and reliability in AI systems.