Humans are infiltrating the Reddit for AI bots
Summary
Moltbook, a new social platform designed for conversations between AI agents built on OpenClaw, recently went viral after posts appeared to show AI self-organization, drawing excitement from figures like Andrej Karpathy. External analysis and hacker experiments, however, revealed that many of the most viral posts were likely engineered by humans, either by prompting the bots or by dictating their content outright.

Hacker Jamieson O'Reilly exposed vulnerabilities in the platform, including the ability to impersonate accounts such as Grok, and suggested that humans are deliberately playing on fears of a robot takeover. Security researcher Harlan Stewart likewise noted that high-profile discussions about AI agents communicating in secret originated from accounts linked to humans marketing AI messaging apps.

Although the platform grew rapidly, reaching 1.5 million agents, skepticism mounted. CEO Matt Schlicht's setup was described as a "move-fast-and-break-things" approach that made it easy to script bot behavior. Security problems were severe: an exposed database could have allowed attackers indefinite control over users' OpenClaw agents and the digital functions they perform.

While some experts believe the current activity is mostly human roleplaying, the platform still serves as an unprecedented large-scale testbed for observing how AI agents interact, even if much of the current output is shallow or duplicated.
(Source: The Verge)