News
🛑 When AI Hallucinates: A Journalist Wrongly Accused...
Artificial intelligence is a powerful tool, but what happens when it spreads false accusations? That is exactly what happened to a German court reporter, whom Microsoft Copilot wrongly accused of serious crimes.
💻 The Case
🔹 The journalist queries Copilot with his own name, hoping to find articles from his cultural blog.
🔹 The AI falsely claims he is a convicted pedophile, an escaped psychiatric patient, and a scammer preying on widows.
🔹 Even worse, Copilot provides his address, phone number, and directions to reach him.
⚖️ The Consequences
🚨 Microsoft removed the false information, but it reappeared a few days later.
🚨 Microsoft declined responsibility, arguing that its terms of use protect the company from liability.
🔎 Why Is This Concerning?
📌 Automated Defamation: Anyone could be falsely accused.
📌 Exposure of Personal Data: Copilot publicly shared his personal details, in apparent violation of the GDPR.
📢 Conclusion:
If even a journalist can have their identity linked to fictitious crimes, what about other professionals and citizens?
This incident highlights the dangers of AI deployed without proper human oversight and underscores the need for rigorous verification mechanisms to prevent misinformation and protect individuals from this kind of harm.