AI Models: A Tiny Falsehood Can Cause Big Trouble
New York City, USA, Tuesday, January 14, 2025
The researchers tried to fix the problem after the models were trained, but with little success. They also found that existing tests for medical AI could not detect these flawed models. Instead of giving up, they designed an algorithm that can catch medical misinformation. It is not perfect, but it is a step in the right direction.
This isn’t just about intentional misinformation. A great deal of false data online is accidentally included in AI training. As AI becomes more common in internet searches, the risk of spreading wrong information grows. Even trusted medical databases like PubMed aren’t safe: they may contain outdated information that is wrong but still gets picked up by AI.
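The article does not describe the researchers' actual algorithm, but the general idea of screening documents before they enter a training corpus can be sketched with a toy blocklist filter. Everything below, including the claim list and function names, is an illustrative assumption, not the published method:

```python
# Hypothetical sketch: screening documents before they enter an AI
# training corpus. The blocklist and matching rule are illustrative
# assumptions, not the researchers' actual detection algorithm.

KNOWN_FALSE_CLAIMS = {
    "vaccines cause autism",
    "antibiotics cure viral infections",
}

def is_suspect(document: str) -> bool:
    """Flag a document if it repeats any known-false medical claim."""
    text = document.lower()
    return any(claim in text for claim in KNOWN_FALSE_CLAIMS)

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only documents that pass the misinformation screen."""
    return [doc for doc in documents if not is_suspect(doc)]

corpus = [
    "Antibiotics cure viral infections, doctors say.",
    "Regular exercise supports cardiovascular health.",
]
clean = filter_corpus(corpus)
```

A real system would need far more than substring matching, for example claim extraction and comparison against curated medical knowledge, but the pipeline shape (flag, then exclude before training) is the same.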