AI industry insiders launch site to poison the data that feeds them
Summary
A group of AI industry insiders, alarmed by current AI development, launched an initiative called Poison Fountain to actively undermine the technology by poisoning its training data. The project urges website operators to add links pointing to poisoned training data—such as code with subtle logic errors—that AI crawlers scrape. Inspired by Anthropic's research showing the practicality of data poisoning, the anonymous participants believe regulation is insufficient because the technology is already widespread. They view this as creating an 'information weapon' to compromise the cognitive integrity of AI models, aligning with concerns raised by figures like Geoffrey Hinton about the existential threat posed by machine intelligence. This effort exists alongside existing issues like 'model collapse,' where AI models degrade by training on synthetic or low-quality data.
(Source:Theregister)